
Setting up a Hortonworks Hadoop cluster in AWS

In this article we will discuss how to set up a Hortonworks Hadoop cluster in AWS (Amazon Web Services).

Assuming you have a valid AWS login, let us get started with:

  1. Launching an Amazon instance
  2. Prerequisites for setting up a Hadoop cluster in AWS
  3. Hadoop cluster Installation (via Ambari)

 

1. Launching an Amazon instance

 

a) Select EC2 from your AWS Management Console.

b) The next step is to create the instance: click on “Launch Instance”.

c) We are going to use CentOS 6.5 from the AWS Marketplace.

 


 

d) Select the instance type. For this exercise we will use m3.xlarge. (Please select an appropriate instance type as per your requirements and budget; see http://aws.amazon.com/ec2/pricing/)

e) Now configure the instance details. We chose to launch 2 instances and left everything else at the default values.

f) Add storage. This will be used for your root (/) volume.

 


g) Tag your instances.

h) Now configure the security group. Hadoop needs TCP and ICMP open, plus the HTTP ports used by the various web UIs. For this exercise we therefore open all TCP and ICMP traffic, but this can be restricted to only the required ports.

 


 

i) The ICMP rule here was added after launching the instance. Yes, this is doable! You can edit the security group settings later as well.

 


 

j) Click on “Review and Launch” now.

k) It will ask you to create a new key pair and download it (a .pem file) to your local machine. This is used to set up passwordless SSH in the Hadoop cluster and is also required by the management UI.

l) Once your instances are launched, you will see their details on your EC2 Dashboard. You can rename your instances here for your reference, but these names won’t be the hostnames of your instances :)

m) Please note down the Public DNS, Public IP, and Private IP of your instances, since these will be required later.

 


 

With this we have successfully completed the first part, launching an Amazon instance! :)

 

2. Prerequisites for setting up a Hadoop cluster in AWS

 

The items listed below are necessary before we move ahead with setting up the Hadoop cluster.

 

Generate a .ppk (private key) from the previously downloaded .pem file

1. Use PuTTYgen to do this: import the .pem file, generate the keys, and save the private key (.ppk). (If you are not on Windows, see the note after this list.)

2. Now open PuTTY and use the Public DNS/Public IP to connect to the master host. Don’t forget to import the .ppk (private key) in PuTTY.

3. Log in as “root”.
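
If you are working from a Linux or macOS machine rather than Windows, the putty-tools package provides a command-line puttygen that performs the same conversion. A minimal sketch, assuming a hypothetical key file named mykey.pem:

puttygen mykey.pem -o mykey.ppk    # convert the PEM key to PuTTY's .ppk format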

 

Passwordless SSH access among the servers in the cluster:

 

1. Use the WinSCP utility to copy the .pem file to your master host. This is used for passwordless SSH from the master to the slave nodes for starting services remotely.

2. While connecting from WinSCP to the master host, provide the Public DNS/Public IP and the username “root”. Instead of a password, supply the .ppk for the connection by clicking on the Advanced button.

3. The public key for this was already placed in ~/.ssh/authorized_keys by AWS while launching your instances.

4. chmod 644 ~/.ssh/authorized_keys

5. chmod 400 ~/.ssh/<your .pem file>

6. Once you are logged in, run the two commands below on the master:

6.1 eval `ssh-agent` (these are backticks, not single quotes!)

6.2 ssh-add ~/.ssh/<your .pem file>

7. Now check whether you can SSH from your master to your slave node without a password. It should work.

8. Please remember that this is lost when the shell exits, and you will have to repeat the ssh-agent and ssh-add commands (see the sketch below).
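
A minimal sketch of steps 6-8 put together, assuming a hypothetical key file mykey.pem and a slave whose Private IP is 172.31.10.12 (substitute your own values):

eval `ssh-agent`                 # start the agent in the current shell
ssh-add ~/.ssh/mykey.pem         # load the key into the agent
ssh root@172.31.10.12 hostname   # should print the slave's hostname with no password prompt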

 

Change hostname

 

1. Here you can change the hostname to the Public DNS or to something of your choice that is easy to remember. We will use something easy to remember, because once you stop your EC2 instance, its Public DNS and Public IP change. Using the Public DNS as the hostname would therefore mean updating the hosts file every time and would disrupt your cluster state on every startup.

2. If you wish to grant public access to your host, then choose the Public DNS/IP as the hostname.

3. Issue the command: hostname <your chosen hostname>

4. Repeat the above step on all hosts! (See the note below on making the change survive a reboot.)
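
Note that the hostname command only renames the host until the next reboot. On CentOS 6 you would typically also persist the name; a sketch, using the hypothetical hostname master1:

hostname master1                                                     # set the name for the running system
sed -i 's/^HOSTNAME=.*/HOSTNAME=master1/' /etc/sysconfig/network     # persist it across reboots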

 

Update /etc/hosts file

 

1. Edit /etc/hosts using “vi” or any other editor and add the mapping of the Private IP to the hostname given above. (The Private IP can be obtained from ifconfig, or from the instance details on the AWS EC2 console as noted earlier.)

2. Update this file to have the IP and hostname of every server in the cluster.

3. Repeat the above two steps on all hosts! (An example is sketched below.)
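
For illustration, /etc/hosts on each node might end up looking like this (the IPs and hostnames are placeholders; use your own Private IPs and chosen hostnames):

127.0.0.1      localhost localhost.localdomain
172.31.10.11   master1
172.31.10.12   slave1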

 

Date/Time should be in sync

 

Check that the date and time on your master and slave nodes are in sync; if not, please configure NTP to do so.
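
A minimal sketch of setting up NTP on CentOS 6 (run on every node):

yum install -y ntp     # install the NTP daemon
service ntpd start     # start syncing time now
chkconfig ntpd on      # keep it enabled across reboots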

 

Install Java on master

 

yum install java-1.7.0-openjdk
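
A quick way to confirm that the JDK is installed and on the PATH:

java -version    # should report OpenJDK version 1.7.0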

 

Disable SELinux

 

1. Ensure that the /etc/selinux/config file contains “SELINUX=disabled”.

Note: Please repeat the above step on all hosts!
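
A sketch of doing this from the shell on each host (the file edit takes effect on the next reboot; setenforce relaxes enforcement for the current session):

setenforce 0                                                    # switch SELinux to permissive mode immediately
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config    # persist the disabled setting across reboots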

 

Firewall should NOT be running

 

1. service iptables stop

2. chkconfig iptables off (This is to ensure that it does not start again on reboot)

3. Repeat the above two steps on all hosts!
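
You can verify that the firewall is indeed down:

service iptables status    # should report that the firewall is not running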

 

Install the HDP and Ambari repositories

 

1. wget -nv http://public-repo-1.hortonworks.com/HDP/centos6/2.x/GA/2.2.0.0/hdp.repo -O /etc/yum.repos.d/HDP.repo

2. wget -nv http://public-repo-1.hortonworks.com/ambari/centos6/1.x/updates/1.7.0/ambari.repo -O /etc/yum.repos.d/ambari.repo

3. Repeat the above two steps on all hosts!
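
To confirm that both repositories were picked up:

yum repolist    # the output should now include the HDP and Ambari repos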

 

 

3. Hadoop cluster Installation (via Ambari)

 

We are going to set up the Ambari server and then proceed to install Hadoop and other components as required, using the Ambari UI.

 

1. yum install ambari-server

2. ambari-server setup (use defaults whenever asked for input during this setup)

3. ambari-server start (this starts the Ambari server)

4. You can now access the Ambari UI in your browser using the Public DNS/Public IP and port 8080 (e.g., http://<master Public DNS>:8080).

5. Log in with username “admin” and password “admin”. (This is the default login; you can change it via the UI.)

6. Launch the cluster and proceed with the installation as per your requirements.

7. Remember the following:

7.1 Register hosts with the hostnames you have set (Public DNS/IP, or whatever other names you provided).

7.2 While registering hosts in the UI, import/upload the .pem file and not the .ppk; otherwise registration of hosts will fail.

7.3 If it fails with an error regarding openssl, then please update the openssl libraries on all your hosts.

8. After successful deployment, you can check the various UIs and run some test jobs on your cluster (a sample job is sketched below). Good luck :)
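
For a quick smoke test, you could run the bundled MapReduce pi estimator; the jar path below is typical for HDP 2.2 but may differ on your install:

su - hdfs -c "hadoop jar /usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples.jar pi 2 10"

Running it as the hdfs user avoids HDFS permission issues that root might hit.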

And yes, stop the instances once the required work is done, to avoid unnecessary billing! Enjoy Hadooping! :)

 

 

 

 
