Integrating LVM with Hadoop and providing Elasticity to DataNode Storage

Prasantmahato
3 min read · Oct 28, 2020

LVM (Logical Volume Manager) is a tool for managing logical volumes: it can allocate disks, and stripe, mirror, and resize logical volumes. By integrating LVM with Hadoop, we can solve many common storage use cases, such as growing a DataNode's storage on demand.

STEP 1

Created an EBS volume of 5 GB.

aws ec2 create-volume --size 5 --volume-type "gp2" --availability-zone "ap-south-1b"
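If you are scripting this step, the new volume's ID can be captured directly from the command output. A sketch, assuming the AWS CLI is configured with credentials for the same region:

```shell
# Create the 5 GB gp2 volume and capture its VolumeId for the later attach step.
# --query/--output are standard AWS CLI options; the zone matches the instance's.
VOL_ID=$(aws ec2 create-volume \
  --size 5 \
  --volume-type gp2 \
  --availability-zone ap-south-1b \
  --query 'VolumeId' --output text)
echo "Created volume: $VOL_ID"
```

The captured $VOL_ID can then be passed to aws ec2 attach-volume instead of copying the ID by hand.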

Created an external EBS volume of 5 GB.

We can confirm it in the AWS Management Console.

Attached the external EBS (Elastic Block Store) volume of 5 GB to the instance.

aws ec2 attach-volume --device "/dev/sdk" --instance-id i-0e76ee670bf53ca77 --volume-id "vol-030d2b6906fc7ca4a"

Attaching the EBS volume to an instance.

Confirming whether the volume is attached.

By using fdisk -l
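The attachment can also be verified from the AWS side. A sketch, reusing the volume ID from the attach command above:

```shell
# Show the attachment state of the volume; "attached" means the attach completed.
aws ec2 describe-volumes \
  --volume-ids vol-030d2b6906fc7ca4a \
  --query 'Volumes[0].Attachments[0].State' --output text
```

Note that on Xen-based instances a volume attached as /dev/sdk typically appears inside the OS as /dev/xvdk, which is why the LVM commands below use /dev/xvdk.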

STEP 2

Now, to create a volume group for the elastic storage, we first have to create a physical volume (PV) that contributes its storage to the volume group (VG).

pvcreate /dev/xvdk

Creating a physical volume

Now that we have created a physical volume, we need to add it to a volume group.

vgcreate arth2020 /dev/xvdk

Creating a volume group

After adding the PV, we are ready with an elastic volume group.
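Before carving out logical volumes, the result can be double-checked with LVM's own reporting commands, a quick sketch:

```shell
# Inspect the physical volume and the volume group we just created.
pvdisplay /dev/xvdk   # shows the PV size and which VG it belongs to
vgdisplay arth2020    # shows total and free extents available for LVs
```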

STEP 3

We have to perform the following three steps.

  1. Create a partition, i.e. a logical volume (LV); a nice property is that we can create as many LVs as the volume group's capacity allows.

lvcreate --size 2G --name lv1 arth2020

Creating a logical volume

2. Format the partition.

mkfs.ext4 /dev/arth2020/lv1

Formatting the partition.

3. Mount the partition.

mount /dev/arth2020/lv1 /dn1

Mount the partition
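To actually serve this mount as DataNode storage, Hadoop has to be pointed at the /dn1 directory. A minimal sketch of the relevant hdfs-site.xml property, assuming a standard Hadoop 2.x/3.x setup (adjust the path to your installation):

```xml
<!-- hdfs-site.xml on the DataNode: store HDFS blocks on the LVM-backed mount -->
<property>
  <name>dfs.datanode.data.dir</name>
  <value>/dn1</value>
</property>
```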

I have successfully created an elastic partition, mounted on the folder (/dn1) that is shared in the Hadoop cluster as DataNode storage. This gives us elasticity: whenever required, we can increase or decrease the volume according to the requirement.
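As a sketch of the elasticity claim: as long as the volume group still has free extents, the logical volume and its ext4 filesystem can be grown online, without unmounting (resize2fs supports online growth for ext4; shrinking, by contrast, requires unmounting first):

```shell
# Grow lv1 by 1 GiB out of the volume group's free space...
lvextend --size +1G /dev/arth2020/lv1

# ...then grow the ext4 filesystem to fill the enlarged LV, while still mounted.
resize2fs /dev/arth2020/lv1

# Confirm the new size as seen by the mounted filesystem.
df -h /dn1
```

Newer LVM versions can do both steps at once with lvextend -r (--resizefs).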

For any queries or suggestions, DM me.

Thank you.
