
Automate HDP installation using Ambari Blueprints – Part 3

In the previous post, we saw how to install a multi-node HDP cluster using Ambari Blueprints. In this post, we will see how to automate HDP installation with NameNode HA using Ambari Blueprints.

 

Below are the simple steps to install an HDP multi-node cluster with NameNode HA using an internal repository via Ambari Blueprints.

 

Step 1: Install the Ambari server using the steps mentioned at the link below

http://docs.hortonworks.com/HDPDocuments/Ambari-2.1.2.1/bk_Installing_HDP_AMB/content/_download_the_ambari_repo_lnx6.html
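
For reference, on a CentOS 6 node the install usually boils down to a handful of commands. Treat this as a rough sketch only; the exact repo URL comes from the documentation above, or from your internal mirror:

wget -nv http://public-repo-1.hortonworks.com/ambari/centos6/2.x/updates/2.1.2.1/ambari.repo -O /etc/yum.repos.d/ambari.repo
yum install -y ambari-server
ambari-server setup -s     # silent setup with default options (embedded PostgreSQL, default JDK)
ambari-server start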

 

Step 2: Register ambari-agent manually

Install the ambari-agent package on all the nodes in the cluster and set the hostname property to the Ambari server host (FQDN) in /etc/ambari-agent/conf/ambari-agent.ini.
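
For example, on each node something along these lines does the job (a minimal sketch; ambari-server.example.com is a placeholder for your Ambari server's FQDN):

yum install -y ambari-agent
# point the agent at the Ambari server in the [server] section of the ini file
sed -i 's/^hostname=.*/hostname=ambari-server.example.com/' /etc/ambari-agent/conf/ambari-agent.ini
ambari-agent start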

 

Step 3: Configure blueprints

Please follow the steps below to create the Blueprint files.

 

3.1 Create the hostmapping.json file as shown below:

Note – This file contains information about all the hosts that are part of your HDP cluster.

{
  "blueprint" : "prod",
  "default_password" : "hadoop",
  "host_groups" : [
    {
      "name" : "prodnode1",
      "hosts" : [
        { "fqdn" : "prodnode1.openstacklocal" }
      ]
    },
    {
      "name" : "prodnode2",
      "hosts" : [
        { "fqdn" : "prodnode2.openstacklocal" }
      ]
    },
    {
      "name" : "prodnode3",
      "hosts" : [
        { "fqdn" : "prodnode3.openstacklocal" }
      ]
    }
  ]
}

 

3.2 Create the cluster_configuration.json file; it contains the mapping of HDP components to host groups.

{
  "configurations" : [
    {
      "core-site" : {
        "properties" : {
          "fs.defaultFS" : "hdfs://prod",
          "ha.zookeeper.quorum" : "%HOSTGROUP::prodnode1%:2181,%HOSTGROUP::prodnode2%:2181,%HOSTGROUP::prodnode3%:2181"
        }
      }
    },
    {
      "hdfs-site" : {
        "properties" : {
          "dfs.client.failover.proxy.provider.prod" : "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider",
          "dfs.ha.automatic-failover.enabled" : "true",
          "dfs.ha.fencing.methods" : "shell(/bin/true)",
          "dfs.ha.namenodes.prod" : "nn1,nn2",
          "dfs.namenode.http-address" : "%HOSTGROUP::prodnode1%:50070",
          "dfs.namenode.http-address.prod.nn1" : "%HOSTGROUP::prodnode1%:50070",
          "dfs.namenode.http-address.prod.nn2" : "%HOSTGROUP::prodnode3%:50070",
          "dfs.namenode.https-address" : "%HOSTGROUP::prodnode1%:50470",
          "dfs.namenode.https-address.prod.nn1" : "%HOSTGROUP::prodnode1%:50470",
          "dfs.namenode.https-address.prod.nn2" : "%HOSTGROUP::prodnode3%:50470",
          "dfs.namenode.rpc-address.prod.nn1" : "%HOSTGROUP::prodnode1%:8020",
          "dfs.namenode.rpc-address.prod.nn2" : "%HOSTGROUP::prodnode3%:8020",
          "dfs.namenode.shared.edits.dir" : "qjournal://%HOSTGROUP::prodnode1%:8485;%HOSTGROUP::prodnode2%:8485;%HOSTGROUP::prodnode3%:8485/prod",
          "dfs.nameservices" : "prod"
        }
      }
    }
  ],
  "host_groups" : [
    {
      "name" : "prodnode1",
      "components" : [
        { "name" : "NAMENODE" },
        { "name" : "JOURNALNODE" },
        { "name" : "ZKFC" },
        { "name" : "NODEMANAGER" },
        { "name" : "DATANODE" },
        { "name" : "ZOOKEEPER_CLIENT" },
        { "name" : "HDFS_CLIENT" },
        { "name" : "YARN_CLIENT" },
        { "name" : "FALCON_CLIENT" },
        { "name" : "OOZIE_CLIENT" },
        { "name" : "HIVE_CLIENT" },
        { "name" : "MAPREDUCE2_CLIENT" },
        { "name" : "ZOOKEEPER_SERVER" }
      ],
      "cardinality" : 1
    },
    {
      "name" : "prodnode2",
      "components" : [
        { "name" : "JOURNALNODE" },
        { "name" : "MYSQL_SERVER" },
        { "name" : "HIVE_SERVER" },
        { "name" : "HIVE_METASTORE" },
        { "name" : "WEBHCAT_SERVER" },
        { "name" : "NODEMANAGER" },
        { "name" : "DATANODE" },
        { "name" : "ZOOKEEPER_CLIENT" },
        { "name" : "ZOOKEEPER_SERVER" },
        { "name" : "HDFS_CLIENT" },
        { "name" : "YARN_CLIENT" },
        { "name" : "FALCON_SERVER" },
        { "name" : "OOZIE_SERVER" },
        { "name" : "FALCON_CLIENT" },
        { "name" : "OOZIE_CLIENT" },
        { "name" : "HIVE_CLIENT" },
        { "name" : "MAPREDUCE2_CLIENT" }
      ],
      "cardinality" : 1
    },
    {
      "name" : "prodnode3",
      "components" : [
        { "name" : "RESOURCEMANAGER" },
        { "name" : "JOURNALNODE" },
        { "name" : "ZKFC" },
        { "name" : "NAMENODE" },
        { "name" : "APP_TIMELINE_SERVER" },
        { "name" : "HISTORYSERVER" },
        { "name" : "NODEMANAGER" },
        { "name" : "DATANODE" },
        { "name" : "ZOOKEEPER_CLIENT" },
        { "name" : "ZOOKEEPER_SERVER" },
        { "name" : "HDFS_CLIENT" },
        { "name" : "YARN_CLIENT" },
        { "name" : "HIVE_CLIENT" },
        { "name" : "MAPREDUCE2_CLIENT" }
      ],
      "cardinality" : 1
    }
  ],
  "Blueprints" : {
    "blueprint_name" : "prod",
    "stack_name" : "HDP",
    "stack_version" : "2.4"
  }
}

Note – I have kept the NameNodes on prodnode1 and prodnode3; you can change this according to your requirements. I have also added a few more services like Hive, Falcon, and Oozie. You can remove them or add more as needed.
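
Before moving on, it is worth validating both JSON files; a malformed file only shows up later as an unhelpful error from the Ambari API. Any JSON validator will do, for example:

python -m json.tool hostmapping.json
python -m json.tool cluster_configuration.json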

 

Step 4: Create an internal repository map

4.1: HDP repository – copy the contents below, modify base_url to point to the hostname/IP address of your internal repository server, and save it as repo.json.

{
  "Repositories" : {
    "base_url" : "http://<ip-address-of-repo-server>/hdp/centos6/HDP-2.4.2.0",
    "verify_base_url" : true
  }
}

 

4.2: HDP-UTILS repository – copy the contents below, modify base_url to point to the hostname/IP address of your internal repository server, and save it as hdputils-repo.json.

{
  "Repositories" : {
    "base_url" : "http://<ip-address-of-repo-server>/hdp/centos6/HDP-UTILS-1.1.0.20",
    "verify_base_url" : true
  }
}

 

Step 5: Register the blueprint with the Ambari server by executing the command below

curl -H "X-Requested-By: ambari" -X POST -u admin:admin http://<ambari-server-hostname>:8080/api/v1/blueprints/prod -d @cluster_configuration.json
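
Note that the blueprint name in the URL (prod here) must match the "blueprint" field in hostmapping.json, otherwise the cluster creation in Step 7 will fail. If you want to confirm the registration, a simple GET against the same endpoint echoes back the stored blueprint:

curl -H "X-Requested-By: ambari" -u admin:admin http://<ambari-server-hostname>:8080/api/v1/blueprints/prod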

 

Step 6: Set up the internal repositories via the REST API.

Execute the curl calls below to set up the internal repositories.

curl -H "X-Requested-By: ambari" -X PUT -u admin:admin http://<ambari-server-hostname>:8080/api/v1/stacks/HDP/versions/2.4/operating_systems/redhat6/repositories/HDP-2.4 -d @repo.json
curl -H "X-Requested-By: ambari" -X PUT -u admin:admin http://<ambari-server-hostname>:8080/api/v1/stacks/HDP/versions/2.4/operating_systems/redhat6/repositories/HDP-UTILS-1.1.0.20 -d @hdputils-repo.json
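
To double-check that Ambari picked up your internal base_url, you can read the repository definition back:

curl -H "X-Requested-By: ambari" -u admin:admin http://<ambari-server-hostname>:8080/api/v1/stacks/HDP/versions/2.4/operating_systems/redhat6/repositories/HDP-2.4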

 

Step 7: Pull the trigger! The command below will start the cluster installation.

curl -H "X-Requested-By: ambari" -X POST -u admin:admin http://<ambari-server-hostname>:8080/api/v1/clusters/multinode-hdp -d @hostmapping.json
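
The call returns immediately with an href to a request resource that tracks the deployment. You can follow the progress in the Ambari UI, or poll that request via the API; the first provisioning request is typically id 1:

curl -H "X-Requested-By: ambari" -u admin:admin http://<ambari-server-hostname>:8080/api/v1/clusters/multinode-hdp/requests/1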

 

Please refer to Part 4 for setting up HDP with Kerberos authentication via Ambari Blueprints.

 

Please feel free to comment if you need any further help on this. Happy Hadooping!! :)

 

 


How to semi-automate deploying a dev HDP cluster

Purpose of this article:

When you install HDP for a dev/test environment, you end up repeating the same commands to set up your host OS. To save time, I created a Bash script which helps set up the host OS (Ubuntu only) and the Docker image (CentOS).

 

What this script does:

  1. Installs packages on the Ubuntu host OS
  2. Sets up Docker, such as creating the image and spawning containers
  3. [Optional] Sets up a local repository for HDP (not Ambari) with Apache2

 

What this script does NOT do:

  1. As of this writing, it does not install HDP.
  2. Please use an Ambari Blueprint if you would like to automate the HDP installation as well:
    http://crazyadmins.com/automate-hdp-installation-using-ambari-blueprints-part-2/
  3. This setup is NOT for production environments, but it is useful for testing HA components.

Host setup steps:

 

Step 1: Install Ubuntu 14.x LTS on VirtualBox/VMware/Azure/AWS.

It should be easy to deploy an Ubuntu VM if you use Azure or AWS.
If you are using VirtualBox/VMware, you might want to back up the freshly installed Ubuntu VM as a template so that you can clone it later.

 

Step 2: Log in to Ubuntu and become root (sudo -i)

 

Step 3: Download the script using the command below

wget https://raw.githubusercontent.com/hajimeo/samples/master/bash/start_hdp.sh -O ./start_hdp.sh && chmod u+x ./start_hdp.sh

 

Step 4: Start the script in install mode

./start_hdp.sh -i

 

Step 5: Start the interview

The script will ask a few questions, such as your choice of guest OS, Ambari version, HDP version, etc. Normally the default values are fine, so you can just keep pressing the Enter key.
NOTE: At the end of the interview, it asks you to save your answers in a text file. You can reuse this file to skip the interview when you install a new cluster.

 

Step 6: Confirm your answers 

After saving your responses, it will ask you "Would you like to start setup this host? [Y]:". If you answer yes, it starts setting up your Ubuntu host OS. After a while the script finishes, or, if there is any error, it stops.

The time required depends on your choices. If you chose to set up a local repo, downloading the repository may take a long time.

 

Step 7: Complete the setup

Once the script completes successfully, your chosen version of Ambari Server should be installed and running in the specified Docker container on port 8080.

NOTE: At this moment, the Docker containers live on a private network, so you will need to do one of the following (option 1 is the easiest):

  1. The following command creates a SOCKS proxy from your local PC on port 18080:

ssh -D 18080 username@ubuntu-hostname

  2. The following command forwards your localhost:8080 to node1:8080:

ssh -L 8080:node1.localdomain:8080 username@ubuntu-hostname

  3. Set up a proper proxy, such as Squid.

If you decide to set up a proxy, installing a browser add-on such as "SwitchySharp" is handy.

  1. Once you have confirmed you can use the Ambari web interface, please proceed to install HDP.
    If you chose to set up an HDP local repository, replace "public-repo-1.hortonworks.com" with "dockerhost1.localdomain" (if you used the default value).
  2. The private key should be /root/.ssh/id_rsa on any node.
  3. The remaining steps are the same as installing a normal HDP cluster.
    NOTE: If you decided to install an older Ambari version, there is a known issue, AMBARI-8620.

 

Host startup steps

If you shut down the VM, next time you can just run "./start_hdp.sh -s", which starts up the containers, Ambari Server, Ambari Agents, and HDP services.

 

How to semi-automate deploying a dev HDP cluster – Did you like this article? Please feel free to send an email to info@crazyadmins.com if you have any further questions on this. Please don't forget to like our Facebook page. Happy Hadooping!! :)

 


Automate HDP installation using Ambari Blueprints – Part 2

In the previous post, we saw how to install a single-node HDP cluster using Ambari Blueprints. In this post, we will see how to automate a multi-node HDP installation using Ambari Blueprints.

 

Below are the simple steps to install an HDP multi-node cluster using an internal repository via Ambari Blueprints.

 

Step 1: Install the Ambari server using the steps mentioned at the link below

http://docs.hortonworks.com/HDPDocuments/Ambari-2.1.2.1/bk_Installing_HDP_AMB/content/_download_the_ambari_repo_lnx6.html

 

Step 2: Register ambari-agent manually

Install the ambari-agent package on all the nodes in the cluster and set the hostname property to the Ambari server host (FQDN) in /etc/ambari-agent/conf/ambari-agent.ini.
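
Once the agents are running, you can confirm that all nodes have registered with the Ambari server before creating the blueprint; the hosts returned here are the ones the blueprint can be mapped to:

curl -H "X-Requested-By: ambari" -u admin:admin http://<ambari-server-hostname>:8080/api/v1/hosts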

 

Step 3: Configure blueprints

Please follow the steps below to create the Blueprint files.

 

3.1 Create the hostmapping.json file as shown below:

Note – This file contains information about all the hosts that are part of your HDP cluster.

{
"blueprint" : "multinode-hdp",
"default_password" : "hadoop",
"host_groups" :[
   {
     "name" : "host2",
     "hosts" : [
       {
         "fqdn" : "host2.crazyadmins.com"
       }
     ]
   },
   {
     "name" : "host3",
     "hosts" : [
       {
         "fqdn" : "host3.crazyadmins.com"
       }
     ]
   },
   {
     "name" : "host4",
     "hosts" : [
       {
         "fqdn" : "host4.crazyadmins.com"
       }
     ]
   }
]
}

 

3.2 Create the cluster_configuration.json file; it contains the mapping of HDP components to host groups.

{
 "configurations": [],
 "host_groups": [{
 "name": "host2",
 "components": [{
 "name": "PIG"
 }, {
 "name": "METRICS_COLLECTOR"
 }, {
 "name": "KAFKA_BROKER"
 }, {
 "name": "HISTORYSERVER"
 }, {
 "name": "HBASE_REGIONSERVER"
 }, {
 "name": "OOZIE_CLIENT"
 }, {
 "name": "HBASE_CLIENT"
 }, {
 "name": "NAMENODE"
 }, {
 "name": "SUPERVISOR"
 }, {
 "name": "HCAT"
 }, {
 "name": "METRICS_MONITOR"
 }, {
 "name": "APP_TIMELINE_SERVER"
 }, {
 "name": "NODEMANAGER"
 }, {
 "name": "HDFS_CLIENT"
 }, {
 "name": "HIVE_CLIENT"
 }, {
 "name": "FLUME_HANDLER"
 }, {
 "name": "DATANODE"
 }, {
 "name": "WEBHCAT_SERVER"
 }, {
 "name": "ZOOKEEPER_CLIENT"
 }, {
 "name": "ZOOKEEPER_SERVER"
 }, {
 "name": "STORM_UI_SERVER"
 }, {
 "name": "HIVE_SERVER"
 }, {
 "name": "FALCON_CLIENT"
 }, {
 "name": "TEZ_CLIENT"
 }, {
 "name": "HIVE_METASTORE"
 }, {
 "name": "SQOOP"
 }, {
 "name": "YARN_CLIENT"
 }, {
 "name": "MAPREDUCE2_CLIENT"
 }, {
 "name": "NIMBUS"
 }, {
 "name": "DRPC_SERVER"
 }],
 "cardinality": "1"
 }, {
 "name": "host3",
 "components": [{
 "name": "ZOOKEEPER_SERVER"
 }, {
 "name": "OOZIE_SERVER"
 }, {
 "name": "SECONDARY_NAMENODE"
 }, {
 "name": "FALCON_SERVER"
 }, {
 "name": "ZOOKEEPER_CLIENT"
 }, {
 "name": "PIG"
 }, {
 "name": "KAFKA_BROKER"
 }, {
 "name": "OOZIE_CLIENT"
 }, {
 "name": "HBASE_REGIONSERVER"
 }, {
 "name": "HBASE_CLIENT"
 }, {
 "name": "HCAT"
 }, {
 "name": "METRICS_MONITOR"
 }, {
 "name": "FALCON_CLIENT"
 }, {
 "name": "TEZ_CLIENT"
 }, {
 "name": "SQOOP"
 }, {
 "name": "HIVE_CLIENT"
 }, {
 "name": "HDFS_CLIENT"
 }, {
 "name": "NODEMANAGER"
 }, {
 "name": "YARN_CLIENT"
 }, {
 "name": "MAPREDUCE2_CLIENT"
 }, {
 "name": "DATANODE"
 }],
 "cardinality": "1"
 }, {
 "name": "host4",
 "components": [{
 "name": "ZOOKEEPER_SERVER"
 }, {
 "name": "ZOOKEEPER_CLIENT"
 }, {
 "name": "PIG"
 }, {
 "name": "KAFKA_BROKER"
 }, {
 "name": "OOZIE_CLIENT"
 }, {
 "name": "HBASE_MASTER"
 }, {
 "name": "HBASE_REGIONSERVER"
 }, {
 "name": "HBASE_CLIENT"
 }, {
 "name": "HCAT"
 }, {
 "name": "RESOURCEMANAGER"
 }, {
 "name": "METRICS_MONITOR"
 }, {
 "name": "FALCON_CLIENT"
 }, {
 "name": "TEZ_CLIENT"
 }, {
 "name": "SQOOP"
 }, {
 "name": "HIVE_CLIENT"
 }, {
 "name": "HDFS_CLIENT"
 }, {
 "name": "NODEMANAGER"
 }, {
 "name": "YARN_CLIENT"
 }, {
 "name": "MAPREDUCE2_CLIENT"
 }, {
 "name": "DATANODE"
 }],
 "cardinality": "1"
 }],
 "Blueprints": {
 "blueprint_name": "multinode-hdp",
 "stack_name": "HDP",
 "stack_version": "2.3"
 }
}
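
Tip – if you already have a cluster configured the way you like through the Ambari UI, you do not have to write this file from scratch; Ambari can export a blueprint from an existing cluster, which makes a handy starting point (the cluster name below is a placeholder):

curl -H "X-Requested-By: ambari" -u admin:admin "http://<ambari-server-hostname>:8080/api/v1/clusters/<existing-cluster-name>?format=blueprint" -o cluster_configuration.json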

 

Step 4: Create an internal repository map

 

4.1: HDP repository – copy the contents below, modify base_url to point to the hostname/IP address of your internal repository server, and save it as repo.json.

{
"Repositories" : {
   "base_url" : "http://<ip-address-of-repo-server>/hdp/centos6/HDP-2.3.4.0",
   "verify_base_url" : true
}
}

 

4.2: HDP-UTILS repository – copy the contents below, modify base_url to point to the hostname/IP address of your internal repository server, and save it as hdputils-repo.json.

{
"Repositories" : {
   "base_url" : "http://<ip-address-of-repo-server>/hdp/centos6/HDP-UTILS-1.1.0.20",
   "verify_base_url" : true
}
}

 

Step 5: Register the blueprint with the Ambari server by executing the command below

curl -H "X-Requested-By: ambari" -X POST -u admin:admin http://<ambari-server-hostname>:8080/api/v1/blueprints/multinode-hdp -d @cluster_configuration.json

 

Step 6: Set up the internal repositories via the REST API.

Execute the curl calls below to set up the internal repositories.

curl -H "X-Requested-By: ambari" -X PUT -u admin:admin http://<ambari-server-hostname>:8080/api/v1/stacks/HDP/versions/2.3/operating_systems/redhat6/repositories/HDP-2.3 -d @repo.json

 

curl -H "X-Requested-By: ambari" -X PUT -u admin:admin http://<ambari-server-hostname>:8080/api/v1/stacks/HDP/versions/2.3/operating_systems/redhat6/repositories/HDP-UTILS-1.1.0.20 -d @hdputils-repo.json

 

Step 7: Pull the trigger! The command below will start the cluster installation.

curl -H "X-Requested-By: ambari" -X POST -u admin:admin http://<ambari-server-hostname>:8080/api/v1/clusters/multinode-hdp -d @hostmapping.json

 

Please feel free to comment or send us an email at info@crazyadmins.com if you need any further help on this. Happy Hadooping!! :)

 

 
