Limiting storage in Hadoop using partitions!

Rohitbhatt
Oct 27, 2020

Hello everyone!

In this article, we will see how a slave node (DataNode) can contribute only a limited, specific amount of storage to a Hadoop cluster.

So let's get started…

1. First, build the Hadoop cluster on the AWS cloud (you can also use your local system). Then start the Hadoop services, i.e. the master (NameNode) and the slave (DataNode), as sketched below.
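If your configuration files (core-site.xml and hdfs-site.xml) are already in place, the daemons can be started roughly like this; the exact scripts depend on your Hadoop version, so treat this as a sketch rather than the only way:

# on the master node
hadoop-daemon.sh start namenode

# on the slave node
hadoop-daemon.sh start datanode

# on either node, confirm the daemon is running
jps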

By default, the slave node contributes all of its storage to the master. We can check this on the slave with the following command:

df -h

2. We can see that the size of the "/" drive is 20 GB, so the entire "/" drive contributes its storage to the master. We can confirm that the DataNode contributes its complete storage by running hadoop dfsadmin -report on the master, as shown below.
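The report lists the total configured capacity of the cluster and the storage of every live DataNode; on newer Hadoop releases the hdfs form of the command gives the same information:

# show cluster capacity and per-DataNode storage
hadoop dfsadmin -report

# equivalent command on Hadoop 2.x and later
hdfs dfsadmin -report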

3. So we have to limit this storage, and for that we will use the concept of disk partitions. Let's create one EBS volume of size 10 GB and attach it to the DataNode instance (see the CLI sketch below if you prefer the terminal over the console).
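I attached the volume from the AWS web console, but the same thing can be done from the AWS CLI. The availability zone, volume ID, instance ID and device name below are placeholders that you would replace with your own values:

# create a 10 GB EBS volume in the same availability zone as the DataNode instance
aws ec2 create-volume --size 10 --availability-zone ap-south-1a --volume-type gp2

# attach the volume to the DataNode instance
aws ec2 attach-volume --volume-id vol-xxxxxxxx --instance-id i-xxxxxxxx --device /dev/sdf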

Now we will go to the terminal of our instance and verify the volume.

Use the fdisk -l command and you will spot your volume.
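On most Amazon Linux instances the attached volume shows up as /dev/xvdf, but the device name can differ, so look for the entry whose size matches the 10 GB volume:

# list every block device with its size and partitions
fdisk -l

# lsblk shows the same information in a more compact tree view
lsblk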

4. Our volume is now attached to the instance, so we can create a partition on it. To create the partition, we use the command:

fdisk  device_name
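fdisk is interactive. Assuming the volume appeared as /dev/xvdf, one primary partition covering the whole disk can be created with roughly this key sequence:

fdisk /dev/xvdf
# n -> create a new partition
# p -> make it a primary partition
# 1 -> partition number (accept the default first and last sectors)
# w -> write the partition table to disk and exit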

5. Our partition is created. After creating the partition, we need to format it before using it. To format the partition, we use the command:

mkfs.ext4  device_name
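Continuing with the /dev/xvdf assumption, the newly created partition is /dev/xvdf1, so formatting it looks like this:

# put an ext4 filesystem on the new partition
mkfs.ext4 /dev/xvdf1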

6. After formatting the partition, we have to mount the new partition on the DataNode directory. To mount the partition, we use the command:

mount  device_name  directory_name
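Assuming /dn1 is the directory the DataNode is configured to use (see the next step) and /dev/xvdf1 is the partition we just formatted, the commands look like this:

# create the DataNode directory if it does not already exist, then mount the partition on it
mkdir -p /dn1
mount /dev/xvdf1 /dn1

# verify that /dn1 is now backed by the 10 GB partition
df -h /dn1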

7. So we can see that we now have control over the DataNode storage: the "/dn1" DataNode directory now uses only the storage that we have provided to it.
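This works because /dn1 is the directory the DataNode was pointed at in hdfs-site.xml. The property name below is the standard one; the /dn1 path is just the value used in this setup:

<!-- hdfs-site.xml on the DataNode; the property is named dfs.data.dir on Hadoop 1.x
     and dfs.datanode.data.dir on Hadoop 2.x and later -->
<property>
    <name>dfs.datanode.data.dir</name>
    <value>/dn1</value>
</property>

Because the DataNode writes its blocks only under this directory, mounting the 10 GB partition there caps how much storage the node can offer to the cluster.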

As you can see in the report of our Hadoop cluster, the storage is now limited to 10 GB, which is equal to the size of the partition we created.

I hope you liked this article.

For further queries or suggestions, feel free to connect with me on LinkedIn.

www.linkedin.com/in/rohit-bhatt-97499b188.

Thank you!!!!!!!!!
