
Hadoop Cluster Maintenance

As Hadoop Admins, it's our responsibility to perform Hadoop cluster maintenance regularly. Let's see what we can do to keep our big elephant happy! 😉

 


 

1. FileSystem Checks

We should check the health of HDFS periodically by running the fsck command:

sudo -u hdfs hadoop fsck /

 

This command contacts the Namenode and recursively checks every file under the provided path. Below is sample output of the fsck command:

sudo -u hdfs hadoop fsck /
FSCK started by hdfs (auth:SIMPLE) from /10.0.2.15 for path / at Wed Apr 06 18:47:37 UTC 2016
Total size: 1842803118 B
Total dirs: 4612
Total files: 11123
Total symlinks: 0 (Files currently being written: 4)
Total blocks (validated): 11109 (avg. block size 165883 B) (Total open file blocks (not validated): 1)
Minimally replicated blocks: 11109 (100.0 %)
Over-replicated blocks: 0 (0.0 %)
Under-replicated blocks: 11109 (100.0 %)
Mis-replicated blocks: 0 (0.0 %)
Default replication factor: 3
Average block replication: 1.0
Corrupt blocks: 0
Missing replicas: 22232 (66.680664 %)
Number of data-nodes: 1
Number of racks: 1
FSCK ended at Wed Apr 06 18:46:54 UTC 2016 in 1126 milliseconds

The filesystem under path '/' is HEALTHY

We can schedule a weekly cron job on the edge node that runs fsck and emails the output to the Hadoop Admin.
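A minimal sketch of such a cron-driven report, assuming local mail delivery works on the edge node; the script path, schedule, and recipient address below are placeholders, not values from this cluster:

```shell
#!/bin/sh
# Hypothetical weekly fsck report script. Schedule it from cron, e.g.:
#   0 6 * * 0 /opt/scripts/hdfs_fsck_report.sh
REPORT="/tmp/fsck_report_$(date +%d%m%Y).txt"
RECIPIENT="hadoop-admin@example.com"   # assumption: adjust to your admin alias

# Guarded so the script is a no-op on machines without the Hadoop CLI.
if command -v hadoop >/dev/null 2>&1; then
  sudo -u hdfs hadoop fsck / > "$REPORT" 2>&1
  mail -s "Weekly HDFS fsck report" "$RECIPIENT" < "$REPORT"
fi
```

Keeping the raw report file around also gives you a history of block counts to compare week over week.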

 

2. HDFS Balancer utility

Over time, data becomes unbalanced across the Datanodes in the cluster. This can happen because of maintenance activity on a specific Datanode, power failures, hardware failures, kernel panics, unexpected reboots, etc. Because of data locality, Datanodes holding more data get churned harder, and ultimately an unbalanced cluster can directly affect your MapReduce job performance.

You can use the below command to run the HDFS balancer:

sudo -u hdfs hdfs balancer -threshold <threshold-value>

By default the threshold value is 10; we can reduce it to as low as 1. A lower threshold balances the cluster more evenly, at the cost of moving more blocks.

Sample output:

[root@sandbox ~]# sudo -u hdfs hdfs balancer -threshold 1
16/04/06 18:57:16 INFO balancer.Balancer: Using a threshold of 1.0
16/04/06 18:57:16 INFO balancer.Balancer: namenodes = [hdfs://sandbox.hortonworks.com:8020]
16/04/06 18:57:16 INFO balancer.Balancer: parameters = Balancer.Parameters [BalancingPolicy.Node, threshold = 1.0, max idle iteration = 5, #excluded nodes = 0, #included nodes = 0, #source nodes = 0, run during upgrade = false]
16/04/06 18:57:16 INFO balancer.Balancer: included nodes = []
16/04/06 18:57:16 INFO balancer.Balancer: excluded nodes = []
16/04/06 18:57:16 INFO balancer.Balancer: source nodes = []
Time Stamp Iteration# Bytes Already Moved Bytes Left To Move Bytes Being Moved
16/04/06 18:57:17 INFO balancer.KeyManager: Block token params received from NN: update interval=10hrs, 0sec, token lifetime=10hrs, 0sec
16/04/06 18:57:17 INFO block.BlockTokenSecretManager: Setting block keys
16/04/06 18:57:17 INFO balancer.KeyManager: Update block keys every 2hrs, 30mins, 0sec
16/04/06 18:57:17 INFO balancer.Balancer: dfs.balancer.movedWinWidth = 5400000 (default=5400000)
16/04/06 18:57:17 INFO balancer.Balancer: dfs.balancer.moverThreads = 1000 (default=1000)
16/04/06 18:57:17 INFO balancer.Balancer: dfs.balancer.dispatcherThreads = 200 (default=200)
16/04/06 18:57:17 INFO balancer.Balancer: dfs.datanode.balance.max.concurrent.moves = 5 (default=5)
16/04/06 18:57:17 INFO balancer.Balancer: dfs.balancer.getBlocks.size = 2147483648 (default=2147483648)
16/04/06 18:57:17 INFO balancer.Balancer: dfs.balancer.getBlocks.min-block-size = 10485760 (default=10485760)
16/04/06 18:57:17 INFO block.BlockTokenSecretManager: Setting block keys
16/04/06 18:57:17 INFO balancer.Balancer: dfs.balancer.max-size-to-move = 10737418240 (default=10737418240)
16/04/06 18:57:17 INFO balancer.Balancer: dfs.blocksize = 134217728 (default=134217728)
16/04/06 18:57:17 INFO net.NetworkTopology: Adding a new node: /default-rack/10.0.2.15:50010
16/04/06 18:57:17 INFO balancer.Balancer: 0 over-utilized: []
16/04/06 18:57:17 INFO balancer.Balancer: 0 underutilized: []
The cluster is balanced. Exiting...
Apr 6, 2016 6:57:17 PM 0 0 B 0 B -1 B
Apr 6, 2016 6:57:17 PM Balancing took 1.383 seconds

We can schedule a weekly cron job on the edge node that runs the balancer and emails the results to the Hadoop Admin.
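A sketch of a cron-friendly balancer wrapper; the threshold, script path, schedule, and recipient are assumptions to tune for your cluster:

```shell
#!/bin/sh
# Hypothetical weekly balancer job. Schedule it from cron, e.g.:
#   0 2 * * 6 /opt/scripts/hdfs_balancer_weekly.sh
THRESHOLD=5                                 # percent; assumption, tune per cluster
LOG="/tmp/balancer_$(date +%d%m%Y).log"

# Guarded so the script is a no-op on machines without the Hadoop CLI.
if command -v hdfs >/dev/null 2>&1; then
  sudo -u hdfs hdfs balancer -threshold "$THRESHOLD" > "$LOG" 2>&1
  mail -s "Weekly HDFS balancer run" hadoop-admin@example.com < "$LOG"  # assumed recipient
fi
```

Running it off-peak (here, early Saturday morning) keeps the extra block traffic away from production jobs.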

 

3. Adding new nodes to the cluster

We should always maintain the list of Datanodes that are authorized to communicate with the Namenode. This can be achieved by setting the dfs.hosts property in hdfs-site.xml:

<property>
  <name>dfs.hosts</name>  
  <value>/etc/hadoop/conf/allowed-datanodes.txt</value>
</property>

If we don't set this property, then any machine that has a Datanode installed and the hdfs-site.xml configuration file can contact the Namenode and become part of the Hadoop cluster.

 

3.1 For NodeManagers

We can add the below property in yarn-site.xml:

<property>
  <name>yarn.resourcemanager.nodes.include-path</name>  
  <value>/etc/hadoop/conf/allowed-nodemanagers.txt</value>
</property>
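With both include files in place, adding a new node boils down to appending its hostname and asking the Namenode to re-read the list. A minimal sketch; the hostname below is a placeholder:

```shell
#!/bin/sh
# Sketch: authorize a new Datanode and refresh the Namenode's node list.
NEW_NODE="new-datanode.example.com"             # placeholder hostname
ALLOWED="/etc/hadoop/conf/allowed-datanodes.txt" # file referenced by dfs.hosts

# Guarded so the script is a no-op on machines without the Hadoop CLI.
if command -v hdfs >/dev/null 2>&1; then
  echo "$NEW_NODE" >> "$ALLOWED"
  sudo -u hdfs hdfs dfsadmin -refreshNodes
fi
```

The same pattern applies for NodeManagers, using the yarn.resourcemanager.nodes.include-path file and `yarn rmadmin -refreshNodes`.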

 

4. Decommissioning a node from the cluster

Even though HDFS is fault tolerant, it's a bad idea to simply stop one or more Datanode daemons, even gracefully. The better approach is to add the IP address of the Datanode we need to remove to the exclude file referenced by the dfs.hosts.exclude property, then run the below command:

sudo -u hdfs hdfs dfsadmin -refreshNodes

After this, the Namenode will start replicating all of that Datanode's blocks to other Datanodes in the cluster. Once the decommission process is complete, it's safe to shut down the Datanode daemon. You can track the progress of the decommission process on the NN Web UI.
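The whole flow can be sketched as below; the exclude-file path and IP address are placeholders standing in for your dfs.hosts.exclude setting and the node being removed:

```shell
#!/bin/sh
# Sketch of the Datanode decommission flow; paths and addresses are assumptions.
EXCLUDE_FILE="/etc/hadoop/conf/excluded-datanodes.txt"  # file referenced by dfs.hosts.exclude
NODE="10.0.2.16"                                        # Datanode to remove (placeholder)

# Guarded so the script is a no-op on machines without the Hadoop CLI.
if command -v hdfs >/dev/null 2>&1; then
  echo "$NODE" >> "$EXCLUDE_FILE"
  sudo -u hdfs hdfs dfsadmin -refreshNodes
  # Progress is also visible from the CLI, not just the NN Web UI:
  sudo -u hdfs hdfs dfsadmin -report | grep "Decommission Status"
fi
```

`hdfs dfsadmin -report` shows each node's status as Normal, Decommission in progress, or Decommissioned.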

 

4.1 For YARN:

Add the IP address of the NodeManager machine to the file referenced by the yarn.resourcemanager.nodes.exclude-path property and run the below command:

sudo -u yarn yarn rmadmin -refreshNodes

 

5. Datanode Volume Failures

The Namenode Web UI shows information about Datanode volume failures. We should check this information periodically, or set up automated monitoring using Nagios, Ambari Metrics (if you are using the Hortonworks Hadoop distribution), JMX monitoring (http://<namenode-host>:50070/jmx), etc. Multiple disk failures on a single Datanode can cause the Datanode daemon to shut down (please check the dfs.datanode.failed.volumes.tolerated property and set it accordingly in hdfs-site.xml).
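As a sketch of the JMX route, the FSNamesystem bean on the Namenode exposes a VolumeFailuresTotal counter. The hostname below is a placeholder, and the call is gated behind an environment variable so the script can be dry-run safely:

```shell
#!/bin/sh
# Sketch: poll the Namenode JMX endpoint for the cluster-wide volume-failure count.
# Hostname is a placeholder; set RUN_JMX_CHECK=1 to actually perform the HTTP call.
NN_JMX="http://namenode.example.com:50070/jmx?qry=Hadoop:service=NameNode,name=FSNamesystem"

if [ -n "${RUN_JMX_CHECK:-}" ] && command -v curl >/dev/null 2>&1; then
  curl -s "$NN_JMX" | grep -o '"VolumeFailuresTotal" *: *[0-9]*'
fi
```

A cron job that alerts when the extracted number is non-zero is a cheap substitute for a full monitoring stack.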

 

6. Database Backups

If you have multiple Hadoop ecosystem components installed, then you should schedule a backup script to take database dumps.

For example:

1. hive metastore database

2. oozie-db

3. ambari db

4. ranger db

Create a simple shell script with the backup commands and schedule it on a weekend; add logic to send an email once the backups are done.
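A minimal sketch of such a script, assuming the metastore/Oozie/Ambari/Ranger databases live in MySQL; the database names, credentials, backup path, and recipient are all placeholders (swap in pg_dump or similar for other engines):

```shell
#!/bin/sh
# Hypothetical weekend dump of Hadoop ecosystem databases, assuming MySQL.
BACKUP_DIR="/tmp/db-backups/$(date +%d%m%Y)"   # backup location is an assumption
mkdir -p "$BACKUP_DIR"

# Guarded so the script is a no-op on machines without the MySQL client tools.
if command -v mysqldump >/dev/null 2>&1; then
  for db in hive oozie ambari ranger; do       # placeholder database names
    mysqldump -u backup_user -p"$DB_PASSWORD" "$db" > "$BACKUP_DIR/$db.sql"
  done
  mail -s "DB backups done: $BACKUP_DIR" hadoop-admin@example.com < /dev/null
fi
```

Copying the dated dump directory off the database host (to HDFS or remote storage) is what makes it a real backup.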

 

7. HDFS Metadata backup

The fsimage holds the metadata of your Hadoop file system, and if it gets corrupted for some reason, your cluster is unusable. It's therefore very important to keep periodic backups of the fsimage.

You can schedule a shell script with the below command to take a backup of the fsimage:

hdfs dfsadmin -fetchImage fsimage.backup.ddmmyyyy
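Wrapped in a script, the command above can stamp each backup with the date automatically; the backup directory is an assumption:

```shell
#!/bin/sh
# Sketch: fetch the current fsimage into a dated backup file (ddmmyyyy, as above).
BACKUP_FILE="/tmp/fsimage.backup.$(date +%d%m%Y)"   # backup location is an assumption

# Guarded so the script is a no-op on machines without the Hadoop CLI.
if command -v hdfs >/dev/null 2>&1; then
  hdfs dfsadmin -fetchImage "$BACKUP_FILE"
fi
```

As with the database dumps, the fetched image should be copied off the Namenode host to survive a machine loss.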

 

8. Purging older log files

In production clusters, if we don't clean up older Hadoop log files, they can eat your entire disk, and daemons can crash with a "no space left on device" error. Always clean up older log files via a cleanup script!
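A sketch of such a cleanup, using find's age filter; the log directory, filename pattern, and retention period are assumptions to adjust to your layout:

```shell
#!/bin/sh
# Sketch: purge rotated Hadoop daemon logs older than the retention window.
LOG_DIR="/var/log/hadoop"   # assumption: adjust to your distribution's log path
RETENTION_DAYS=30           # assumption: pick a window that suits your audits

# Guarded so the script is a no-op where the log directory does not exist.
if [ -d "$LOG_DIR" ]; then
  find "$LOG_DIR" -type f -name "*.log.*" -mtime +"$RETENTION_DAYS" -print -delete
fi
```

Matching only rotated files (`*.log.*`) leaves each daemon's current log untouched.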

 

Please comment if you have any feedback/questions/suggestions. Happy Hadooping!! :)

 
