
Tune Hadoop Cluster to get Maximum Performance (Part 1)

I have been working on production Hadoop clusters for a while and have learned many performance tuning tips and tricks. In this blog I will explain how to tune a Hadoop cluster to get maximum performance. Simply installing Hadoop for a production cluster or a development POC does not give the expected results, because the default Hadoop configuration settings assume minimal hardware. It is the Hadoop administrator's responsibility to understand the hardware specs: the amount of RAM, the total number of CPU cores (physical and virtual), whether the processor supports hyper-threading, the NIC cards, the number of disks mounted on the DataNodes, and so on.

 


 

For better understanding, I have divided this blog into two main parts.

1. Tune your Hadoop Cluster to get Maximum Performance (Part 1) – In this part I will explain how to tune your operating system in order to get maximum performance for your Hadoop jobs.

2. Tune your Hadoop Cluster to get Maximum Performance (Part 2) – In this part I will explain how to modify your Hadoop configuration parameters so that they use your hardware efficiently.

 

How does OS tuning improve the performance of Hadoop?

Tuning your CentOS 6.x/Red Hat 6.x operating system can increase Hadoop performance by 20-25%. Yes! 20-25% :-)

 

Let’s get started and see what parameters we need to change on OS level.

 

1. Turn off the Power savings option in BIOS:

This will increase overall system performance as well as Hadoop performance. Go to your BIOS settings and change the profile from power-saving mode to performance-optimized (the exact option name may differ depending on your server vendor). If you have a remote console command line available, you can use racadm commands to check the status and update it. You need to restart the system for the change to take effect.
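For example, on a Dell server with iDRAC, racadm can query and change the system profile remotely. Treat this as a hedged sketch: the attribute path below is an assumption based on recent iDRAC firmware and may differ on your hardware.

# query the current BIOS system profile (Dell iDRAC; attribute path may vary by firmware)
racadm get BIOS.SysProfileSettings.SysProfile
# switch to the performance-optimized profile
racadm set BIOS.SysProfileSettings.SysProfile PerfOptimized

A reboot is still required before the new profile takes effect.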

 

 

2. Open file descriptor limits:

By default the open file limit is 1024 per user, and if you keep the default you may hit java.io.FileNotFoundException: (Too many open files) and your job will fail. To avoid this scenario, raise the open file limit to a much higher number such as 32832.

 

Commands:

ulimit -Sn 4096
ulimit -Hn 32832
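Note that ulimit only affects the current shell session. To make the higher limits persistent for the users that run the Hadoop daemons, you can add entries to /etc/security/limits.conf; below is a minimal sketch assuming the usual hdfs, mapred and yarn service users (adjust the names to your cluster).

# /etc/security/limits.conf - raise the open file limit for Hadoop service users
hdfs    -    nofile    32832
mapred  -    nofile    32832
yarn    -    nofile    32832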

Also, please set the system-wide file descriptor limit using the command below:

sysctl -w fs.file-max=6544018

The above kernel setting is temporary; to make it permanent, edit /etc/sysctl.conf and add the value below at the end of the file:

fs.file-max=6544018
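To load the new value from /etc/sysctl.conf without a reboot and confirm it took effect:

# re-read /etc/sysctl.conf and verify the new limit
sysctl -p
cat /proc/sys/fs/file-max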

 

 

 

3. FileSystem Type & Reserved Space:

To get maximum performance from your Hadoop jobs, I personally suggest using the ext4 filesystem, as it has some advantages over ext3 such as multi-block allocation and delayed allocation. How you mount your filesystems also makes a difference, because if you mount them with the default options there will be excessive writes just to update file and directory access times, which we do not need in the case of Hadoop. Mounting your local disks with the noatime option will improve performance by disabling those excessive and unnecessary writes to disk.

Below is a sample of how the /etc/fstab entries should look:

 

UUID=gfd3f77-6b11-4ba0-8df9-75feb03476efs /disk1                 ext4   noatime       0 0
UUID=524cb4ff-a2a5-4e8b-bd24-0bbfd829a099 /disk2                 ext4   noatime       0 0
UUID=s04877f0-40e0-4f5e-a15b-b9e4b0d94bb6 /disk3                 ext4   noatime       0 0
UUID=3420618c-dd41-4b58-ab23-476b719aaes  /disk4                 ext4   noatime       0 0

 

Note – the noatime option also implies nodiratime, so there is no need to specify it separately.
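You can apply noatime to a disk that is already mounted, without a reboot, by remounting it. For example, for the first data disk from the sample above:

# remount an existing data disk with noatime
mount -o remount,noatime /disk1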

 

Many of you are probably aware that after formatting a disk partition with ext4, 5% of the space is reserved for special situations, such as allowing root to delete files when the disk is 100% full. For Hadoop data disks we do not need that 5% reservation, so please remove it using the tune2fs command.

 

Command:

tune2fs -m 0 /dev/sdXY

 

Note – 0 indicates that 0% space is reserved.
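To confirm that the reservation is really gone, check the filesystem metadata; the reserved block count should show 0:

tune2fs -l /dev/sdXY | grep -i "reserved block count"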

 

 

4. Network Parameters Tuning:

 

Network parameter tuning also helps to get a performance boost! This is somewhat risky, because if you are working on a remote server and make a mistake while updating network parameters, you can lose connectivity and be unable to reach the server again until you correct the mistake via an IPMI/iDRAC/iLO console. Modify these parameters only when you know what you are doing :-)

Raising net.core.somaxconn from its default value of 128 to 1024 helps Hadoop because it increases the listen queue between the master and slave services, so ultimately more connections between masters and slaves can be accepted than before.

 

Command to modify net.core.somaxconn:

sysctl -w net.core.somaxconn=1024

To make the above change permanent, simply add the following line at the end of /etc/sysctl.conf:

net.core.somaxconn=1024

 

 

MTU Settings:

MTU (maximum transmission unit) is the largest size of packet/frame that can be sent over the network. By default the MTU is set to 1500, and you can tune it to 9000; frames larger than the default 1500 bytes are called jumbo frames.

 

Command to change value of MTU:

Add MTU=9000 to /etc/sysconfig/network-scripts/ifcfg-eth0 (or whatever your Ethernet device is named), then restart the network service for the change to take effect.

 

Note – before modifying this value, please make sure that every node in your cluster, including the switches, supports jumbo frames; if not, *PLEASE DO NOT ATTEMPT THIS*.
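A simple way to test jumbo frames before making the change permanent is to raise the MTU on the fly and send a non-fragmentable 9000-byte ping between two nodes (the hostname slave1 below is just an example):

# temporary MTU change, lost after a network restart or reboot
ip link set dev eth0 mtu 9000
# 8972 bytes of payload + 28 bytes of ICMP/IP headers = 9000; -M do forbids fragmentation
ping -M do -s 8972 -c 3 slave1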

 

 

5. Transparent Huge Page Compaction:

 

This Linux feature is generally helpful for application performance, including Hadoop workloads; however, one part of Transparent Huge Pages, called compaction, causes issues with Hadoop jobs (it drives high CPU usage while defragmenting memory). When I was benchmarking a client's cluster I observed fluctuations of ~15% in the results, and when I disabled compaction the fluctuation was gone. So I recommend disabling it for Hadoop.

 

Command:

echo never > /sys/kernel/mm/redhat_transparent_hugepage/defrag

 

To make the above change permanent, please add the below line to your /etc/rc.local file:

if test -f /sys/kernel/mm/redhat_transparent_hugepage/defrag; then echo never > /sys/kernel/mm/redhat_transparent_hugepage/defrag ;fi
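You can check the current setting at any time; the value shown in square brackets is the one in effect:

cat /sys/kernel/mm/redhat_transparent_hugepage/defrag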

 

 

6. Memory Swapping:

Swapping reduces Hadoop job performance; you want as much data in memory as possible, with the OS tuned so that it swaps only in a near-OOM (OutOfMemory) situation. To do so, set the vm.swappiness kernel parameter to 0.

 

Command:

sysctl -w vm.swappiness=0

 

Please add the below line to /etc/sysctl.conf to make it persistent:

vm.swappiness=0

 
I hope this information helps anyone looking for OS-level tuning parameters for Hadoop. Please don't forget to give your feedback via comments or ask questions if you have any.
Thank you :-) I will publish the second part next week!

 

 


Setting up Hortonworks Hadoop cluster in AWS

In this article we will discuss how to set up a Hortonworks Hadoop cluster in AWS (Amazon Web Services).

Assuming you have a valid AWS login, let us get started with:

  1. Launching an Amazon instance
  2. Pre-requisites for setting up Hadoop cluster in AWS
  3. Hadoop cluster Installation (via Ambari)

 

1. Launching an Amazon instance

 

a) Select EC2 from your AWS Management Console.

b) The next step is to create the instance. Click on “Launch Instance”.

c) We are going to use CentOS 6.5 from the AWS Marketplace.

 


 

d) Select the instance type. For this exercise we will use m3.xlarge (please select an appropriate instance type as per your requirements and budget; see http://aws.amazon.com/ec2/pricing/).

e) Now configure the instance details. We chose to launch 2 instances and kept all other values at their defaults.

f) Add storage. This will be used for your / volume.

 


g) Tag your instance

h) Now configure the security group. For Hadoop we need the TCP and ICMP ports, plus the HTTP ports for the various UIs, to be open. For this exercise we therefore open all TCP and ICMP ports, but this can be restricted to only the required ports.

 


 

i) The ICMP rule was added after launching the instance. Yes, this is doable! You can edit the security group settings later as well.

 


 

j) Click on “Review and Launch” now.

k) It will ask you to create a new key pair and download it (a .pem file) to your local machine. This is used to set up passwordless SSH in the Hadoop cluster and is also required by the management UI.

l) Once your instances are launched, you will see their details on your EC2 Dashboard as below. You can rename your instances here for your reference, but these names won’t be the hostnames of your instances :)

m) Please note down the Public DNS, Public IP, and Private IP of your instances since these will be required later.

 


 

With this we complete the first part of Launching an Amazon instance successfully! :)

 

2. Pre-requisites for setting up Hadoop cluster in AWS

 

The items listed below are necessary before we move ahead with setting up the Hadoop cluster.

 

Generate a .ppk (private key) from the previously downloaded .pem file

1. Use PuTTYgen to do this. Import the .pem file, generate the keys, and save the private key (.ppk).

2. Now open PuTTY and use the Public DNS/Public IP to connect to the master host. Don’t forget to load the .ppk (private key) in PuTTY.

3. Log in as “root”.

 

Password-less SSH access among servers in cluster:

 

1. Use the WinSCP utility to copy the .pem file to your master host. This is used for passwordless SSH from the master to the slave nodes for starting services remotely.

2. While connecting from WinSCP to the master host, provide the Public DNS/Public IP and the username “root”. Instead of a password, supply the .ppk for the connection by clicking on the Advanced button.

3. The corresponding public key was already placed in ~/.ssh/authorized_keys by AWS while launching your instances.

4. chmod 644 ~/.ssh/authorized_keys

5. chmod 400 ~/.ssh/<your .pem file>

6. Once you are logged in, run the below two commands on the master:

6.1 eval `ssh-agent` (these are backticks, not single quotes!)

6.2 ssh-add ~/.ssh/<your .pem file>

7. Now check whether you can SSH to your slave node from your master without a password. It should work (see the sketch after this list).

8. Please remember that this will be lost when you exit the shell, and you will have to repeat the ssh-agent and ssh-add commands.
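Here is a minimal sketch of steps 6 and 7 put together, assuming the key file is named mycluster.pem and the slave’s hostname is slave1 (both names are just examples):

# load the AWS key into an SSH agent for this shell
eval `ssh-agent`
ssh-add ~/.ssh/mycluster.pem
# verify passwordless access to a slave node
ssh root@slave1 hostname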

 

Change hostname

 

1. Here you can change the hostname to the Public DNS or to anything you like that is easy to remember. We will use something easy to remember, because once you stop your EC2 instance its Public DNS and Public IP change; using them would mean the extra work of updating the hosts file every time and would also disrupt your cluster state on every startup.

2. If you wish to grant public access to your host, then choose the Public DNS/IP as the hostname.

3. Issue the command: hostname <your chosen hostname> (a persistence sketch follows this list).

4. Repeat the above step on all hosts!
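On CentOS 6 the hostname command only lasts until the next reboot. To make it persistent, also update /etc/sysconfig/network; a short sketch assuming the example hostname master1:

# set the hostname for the current session (example name)
hostname master1
# keep it across reboots on CentOS 6
sed -i 's/^HOSTNAME=.*/HOSTNAME=master1/' /etc/sysconfig/network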

 

Update /etc/hosts file

 

1. Edit /etc/hosts using “vi” or any other editor and add the mapping of the Private IP to the hostname set above. (The Private IP can be obtained from ifconfig or from the instance details on the AWS EC2 console, as noted earlier.)

2. Update this file to contain the IP and hostname of every server in the cluster (a sample follows this list).

3. Repeat the above two steps on all hosts!
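A sample /etc/hosts after the update might look like the lines below; the private IPs and hostnames are just examples, so substitute your own values:

# /etc/hosts - one line per cluster node (example values)
172.31.10.11   master1
172.31.10.12   slave1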

 

Date/Time should be in sync

 

Check that the time and date on your master and slave nodes are in sync; if not, please configure NTP to keep them synchronized.
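On CentOS 6 a basic NTP setup looks like this (run on every node):

# install and enable NTP so clocks stay in sync
yum install -y ntp
service ntpd start
chkconfig ntpd on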

 

Install Java on master

 

yum install java-1.7.0-openjdk

 

Disable selinux

 

1. Ensure that the /etc/selinux/config file contains “SELINUX=disabled” (see the sketch after the note below).

Note: please repeat the above step on all hosts!
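A quick sketch for this step: setenforce switches SELinux to permissive mode immediately (no reboot needed), and the sed command makes the change persistent in /etc/selinux/config:

setenforce 0
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config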

 

Firewall should NOT be running

 

1. service iptables stop

2. chkconfig iptables off (This is to ensure that it does not start again on reboot)

3. Repeat the above two steps on all hosts!

 

Install the HDP and Ambari repositories

 

1. wget -nv http://public-repo-1.hortonworks.com/HDP/centos6/2.x/GA/2.2.0.0/hdp.repo -O /etc/yum.repos.d/HDP.repo

2. wget -nv http://public-repo-1.hortonworks.com/ambari/centos6/1.x/updates/1.7.0/ambari.repo -O /etc/yum.repos.d/ambari.repo

3. Repeat the above two steps on all hosts!

 

 

3. Hadoop cluster Installation (via Ambari)

 

We are going to set up the Ambari server and then proceed to install Hadoop and other components as required, using the Ambari UI.

 

1. yum install ambari-server

2. ambari-server setup (use defaults whenever asked for input during this setup)

3. ambari-server start     (starting the ambari server)

4. Now you can access the Ambari UI in your browser using the Public DNS/Public IP and port 8080 (e.g., http://<public-dns>:8080)

5. Log in with username: admin and password: admin (this is the default login; you can change it via the UI)

6. Launch the cluster wizard and proceed with the installation as per your requirements.

7. Remember the following things:

7.1 Register hosts with the hostnames you have set (Public DNS/IP or whatever other names you provided)

7.2 While registering hosts in the UI, import/upload the .pem file and not the .ppk; otherwise host registration will fail.

7.3 If it fails with an error regarding openssl, then please update the openssl libraries on all your hosts.

8. After successful deployment, you can now check the various UIs and run some test jobs on your cluster! Good luck :)

And yes, stop the instances once required work is done to avoid unnecessary billing! Enjoy Hadooping! :)

 

 

 

 
