Contribute a limited/specific amount of storage as a slave node to the cluster

Big Data, Hadoop — Flexibility of changing sizes

Let's suppose we have set up a Hadoop cluster with just one DataNode.

We notice that the DataNode has contributed the entire capacity of its hard disk to the cluster, which is 8 GB in this case.
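As a quick sketch, the contributed capacity can also be checked from the command line instead of the web UI (assuming the Hadoop binaries are on the PATH of the cluster user):

```bash
# Report cluster capacity and per-DataNode details; the "Configured Capacity"
# lines show how much storage each DataNode contributes
# (roughly 8 GB here, i.e. the whole disk).
hdfs dfsadmin -report
```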

To control how much storage this DataNode contributes, we create a new EBS volume of the desired size and attach it to the DataNode instance using the AWS CLI.
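A minimal sketch of those AWS CLI calls, assuming a 4 GiB gp2 volume in the same availability zone as the DataNode instance (the zone, volume ID, instance ID, and device name below are placeholders, adjust them to your setup):

```bash
# Create a new EBS volume of the size we actually want to contribute.
aws ec2 create-volume \
    --availability-zone ap-south-1a \
    --size 4 \
    --volume-type gp2

# Attach it to the DataNode instance as an extra block device.
aws ec2 attach-volume \
    --volume-id vol-0123456789abcdef0 \
    --instance-id i-0123456789abcdef0 \
    --device /dev/xvdf
```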

Next, stop the DataNode daemon so its storage directory can be remounted safely.
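One way to do this on Hadoop 2.x (on Hadoop 3.x the preferred form is `hdfs --daemon stop datanode`):

```bash
# Stop the DataNode daemon before touching its storage directory.
hadoop-daemon.sh stop datanode
```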

Partition the attached volume, format it, and mount it on the directory the DataNode contributes to HDFS (the path configured as dfs.datanode.data.dir in hdfs-site.xml). Then start the DataNode again.
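A sketch of those steps, assuming the volume shows up as /dev/xvdf and the DataNode's storage directory is /dn (both are assumptions for illustration):

```bash
# Create a single partition spanning the new volume
# (fdisk is interactive: n, p, accept defaults, then w to write).
fdisk /dev/xvdf

# Format the new partition with ext4.
mkfs.ext4 /dev/xvdf1

# Mount it on the directory the DataNode contributes to HDFS
# (the dfs.datanode.data.dir path, /dn in this example).
mount /dev/xvdf1 /dn

# Start the DataNode again.
hadoop-daemon.sh start datanode
```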

Now, if we check the web UI of the Hadoop cluster, we can see that the storage contributed by the DataNode has changed to the size of the attached volume.
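The same check from before confirms it on the command line as well (the NameNode web UI is served on port 9870 on Hadoop 3.x, 50070 on 2.x):

```bash
# The configured capacity now reflects the size of the attached volume
# rather than the whole disk.
hdfs dfsadmin -report | grep "Configured Capacity"
```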

Feel free to contact me on LinkedIn.
