Integrating LVM with Hadoop to provide Elasticity to DataNode Storage

Big Data and Elasticity in Storage

--

When we create a Hadoop cluster, each DataNode is usually limited to a fixed, pre-configured storage capacity.

To make this storage elastic, we use LVM (Logical Volume Manager), which lets us resize storage on the fly without unmounting the filesystem or losing data.

Create two block devices (disks) and attach them to the DataNode instance.
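Once the disks are attached, we can confirm the operating system sees them. This is a sketch: the device names /dev/xvdf and /dev/xvdg are assumptions and will differ depending on your cloud provider or hypervisor (they may appear as /dev/sdb, /dev/vdb, etc.).

```shell
# List all block devices; the two newly attached disks should appear
# without partitions or mount points (names here are assumptions)
lsblk

# Inspect the disks' sizes and details
fdisk -l /dev/xvdf /dev/xvdg
```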

Create physical volumes from the attached disks and add them to a volume group.
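The step above might look like the following. The device names and the volume group name dn_vg are illustrative assumptions, not values from the original setup.

```shell
# Initialise both attached disks as LVM physical volumes
pvcreate /dev/xvdf /dev/xvdg

# Create a volume group spanning both physical volumes
# ("dn_vg" is an example name)
vgcreate dn_vg /dev/xvdf /dev/xvdg

# Verify the physical volumes and the volume group
pvdisplay
vgdisplay dn_vg
```

The volume group pools the capacity of both disks, so the logical volume created next can be any size up to their combined total.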

Using this volume group, create a logical volume, format it with a filesystem, and mount it at /dn, the directory the DataNode uses for storage.
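A minimal sketch of that step, assuming the volume group from before is named dn_vg; the logical volume name dn_lv and the 10 GiB size are examples.

```shell
# Carve a 10 GiB logical volume out of the volume group
lvcreate --size 10G --name dn_lv dn_vg

# Format it with ext4 and mount it at /dn
mkfs.ext4 /dev/dn_vg/dn_lv
mkdir -p /dn
mount /dev/dn_vg/dn_lv /dn
```

This assumes the DataNode's dfs.datanode.data.dir property in hdfs-site.xml points at /dn, so HDFS stores its blocks on the logical volume.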

The DataNode now contributes the logical volume's capacity to the cluster instead of a fixed disk size.
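We can check this from both the operating system's and Hadoop's point of view. The second command assumes it is run on a node with the HDFS client configured for the cluster.

```shell
# Capacity of the mounted logical volume as seen by the OS
df -h /dn

# Capacity each DataNode reports to the NameNode
hdfs dfsadmin -report
```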

To increase this size, we can extend the logical volume using lvextend and then grow the filesystem over the newly added space using resize2fs.
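For example, growing the volume by 5 GiB (the amount and the dn_vg/dn_lv names are assumptions carried over from the earlier sketch):

```shell
# Grow the logical volume by an additional 5 GiB
lvextend --size +5G /dev/dn_vg/dn_lv

# Grow the ext4 filesystem to fill the new space; this works online,
# while /dn remains mounted and the DataNode keeps running
resize2fs /dev/dn_vg/dn_lv
```

As a shortcut, lvextend's -r (--resizefs) flag performs both steps in one command.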

Checking df -h again, we see that the DataNode's storage has been extended, with no downtime and no data loss.

Feel free to contact me on LinkedIn.
