Integrating LVM with Hadoop and providing Elasticity to DataNode Storage
LVM (Logical Volume Manager) is a tool for logical volume management that supports allocating disks, striping, mirroring, and resizing logical volumes. By integrating LVM with Hadoop, we can solve many common storage use cases, such as growing a DataNode's storage on demand.
STEP 1
Created an EBS volume of 5 GiB:
aws ec2 create-volume --size 5 --volume-type "gp2" --availability-zone "ap-south-1b"
We can confirm it in the AWS Management Console.
Attached the external EBS (Elastic Block Store) volume of 5 GiB to the instance:
aws ec2 attach-volume --device "/dev/sdk" --instance-id i-0e76ee670bf53ca77 --volume-id "vol-030d2b6906fc7ca4a"
Confirming that the volume is attached.
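As a quick sketch of how to verify the attachment (the volume ID is the one from the attach step above; inside the instance, the device usually shows up as /dev/xvdk even though we requested /dev/sdk):

```shell
# From inside the instance: list block devices and look for the new 5 GiB disk
lsblk

# Or from the AWS CLI: check the attachment state of the volume
aws ec2 describe-volumes --volume-ids vol-030d2b6906fc7ca4a \
    --query "Volumes[0].Attachments[0].State"
```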
STEP 2
Now, to provide elastic storage, we create a volume group. We first create a physical volume (PV), which contributes its storage to the volume group (VG):
pvcreate /dev/xvdk
Now that the physical volume is created, we add it to a volume group:
vgcreate arth2020 /dev/xvdk
After attaching the PV, the volume group is ready; any logical volume carved from it can be resized on demand.
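To confirm the PV and VG were created as expected (names match the commands above), LVM's display commands can be used as a quick check:

```shell
# Show details of the physical volume we created
pvdisplay /dev/xvdk

# Show the volume group, including total and free size available for LVs
vgdisplay arth2020
```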
STEP 3
We have to perform the following three steps.
1. Create a partition, i.e. a logical volume (LV). A good thing is that we can create as many LVs as we want, up to the VG's capacity.
lvcreate --size 2G --name lv1 arth2020
2. Format the partition.
mkfs.ext4 /dev/arth2020/lv1
3. Mount the partition (create the mount point first with mkdir /dn1).
mount /dev/arth2020/lv1 /dn1
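Once /dn1 is mounted, it can be configured as the DataNode's storage directory so the Hadoop cluster uses the logical volume. A minimal hdfs-site.xml fragment for the DataNode might look like this (the property name is standard Hadoop; the path matches the mount point above):

```xml
<!-- hdfs-site.xml on the DataNode: store HDFS blocks on the mounted LV -->
<property>
  <name>dfs.datanode.data.dir</name>
  <value>/dn1</value>
</property>
```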
I have successfully created an elastic partition, mounted at the folder (/dn1) that is shared with the Hadoop cluster as DataNode storage. This provides elasticity: whenever we want, we can increase or decrease the volume according to the requirement.
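As a sketch of what "increase the volume" looks like in practice (assuming the VG arth2020 still has free extents), the logical volume can be grown online without unmounting /dn1:

```shell
# Grow lv1 by 1 GiB (requires free space in volume group arth2020)
lvextend --size +1G /dev/arth2020/lv1

# Grow the ext4 filesystem to fill the enlarged LV; ext4 supports online resize
resize2fs /dev/arth2020/lv1
```

After the resize, the extra capacity should be visible in `df -h /dn1`, and the DataNode's increased capacity can be checked from the NameNode with `hdfs dfsadmin -report`. (Alternatively, `lvextend -r` runs the filesystem resize in one step.)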
For any queries or suggestions, DM me.
Thank you.