Automate HDP installation using Ambari Blueprints – Part 3

In the previous post, we saw how to install a multi-node HDP cluster using Ambari Blueprints. In this post, we will see how to automate HDP installation using Ambari Blueprints with NameNode HA configured.

 

Below are the steps to install a multi-node HDP cluster with NameNode HA, using an internal repository, via Ambari Blueprints.

 

Step 1: Install the Ambari server using the steps mentioned in the link below:

http://docs.hortonworks.com/HDPDocuments/Ambari-2.1.2.1/bk_Installing_HDP_AMB/content/_download_the_ambari_repo_lnx6.html
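For quick reference, a minimal sketch of the usual flow on CentOS/RHEL 6 follows (the repo file path on your internal server is an assumption; adjust it to match your mirror layout):

# Download the Ambari repo file from your internal repository server
# (<ip-address-of-repo-server> and the path below are placeholders)
wget -O /etc/yum.repos.d/ambari.repo http://<ip-address-of-repo-server>/ambari/centos6/ambari.repo

# Install, set up, and start the Ambari server
yum install -y ambari-server
ambari-server setup -s    # -s runs a silent setup with defaults (embedded database, default JDK)
ambari-server start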

 

Step 2: Register ambari-agent manually

Install the ambari-agent package on all nodes in the cluster, and set the hostname property to the Ambari server's FQDN in /etc/ambari-agent/conf/ambari-agent.ini.
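For example, run the following on each node (a sketch; <ambari-server-fqdn> is a placeholder for your Ambari server's FQDN):

yum install -y ambari-agent
# Point the agent at the Ambari server (hostname line in the [server] section)
sed -i 's/^hostname=.*/hostname=<ambari-server-fqdn>/' /etc/ambari-agent/conf/ambari-agent.ini
ambari-agent start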

 

Step 3: Configure blueprints

Please follow the steps below to create the blueprint files.

 

3.1 Create a hostmapping.json file as shown below:

Note – This file maps each host group to the FQDNs of the hosts that are part of your HDP cluster.

{
  "blueprint" : "prod",
  "default_password" : "hadoop",
  "host_groups" : [
    {
      "name" : "prodnode1",
      "hosts" : [
        { "fqdn" : "prodnode1.openstacklocal" }
      ]
    },
    {
      "name" : "prodnode2",
      "hosts" : [
        { "fqdn" : "prodnode2.openstacklocal" }
      ]
    },
    {
      "name" : "prodnode3",
      "hosts" : [
        { "fqdn" : "prodnode3.openstacklocal" }
      ]
    }
  ]
}
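A malformed file will make the cluster creation call fail later, so it is worth validating the JSON syntax up front (a quick check, assuming Python is available):

python -m json.tool hostmapping.json    # prints the parsed JSON, or an error pointing at the offending line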

 

3.2 Create a cluster_configuration.json file; it contains the NameNode HA configuration and the mapping of HDP components to host groups.

{
  "configurations" : [
    {
      "core-site" : {
        "properties" : {
          "fs.defaultFS" : "hdfs://prod",
          "ha.zookeeper.quorum" : "%HOSTGROUP::prodnode1%:2181,%HOSTGROUP::prodnode2%:2181,%HOSTGROUP::prodnode3%:2181"
        }
      }
    },
    {
      "hdfs-site" : {
        "properties" : {
          "dfs.client.failover.proxy.provider.prod" : "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider",
          "dfs.ha.automatic-failover.enabled" : "true",
          "dfs.ha.fencing.methods" : "shell(/bin/true)",
          "dfs.ha.namenodes.prod" : "nn1,nn2",
          "dfs.namenode.http-address" : "%HOSTGROUP::prodnode1%:50070",
          "dfs.namenode.http-address.prod.nn1" : "%HOSTGROUP::prodnode1%:50070",
          "dfs.namenode.http-address.prod.nn2" : "%HOSTGROUP::prodnode3%:50070",
          "dfs.namenode.https-address" : "%HOSTGROUP::prodnode1%:50470",
          "dfs.namenode.https-address.prod.nn1" : "%HOSTGROUP::prodnode1%:50470",
          "dfs.namenode.https-address.prod.nn2" : "%HOSTGROUP::prodnode3%:50470",
          "dfs.namenode.rpc-address.prod.nn1" : "%HOSTGROUP::prodnode1%:8020",
          "dfs.namenode.rpc-address.prod.nn2" : "%HOSTGROUP::prodnode3%:8020",
          "dfs.namenode.shared.edits.dir" : "qjournal://%HOSTGROUP::prodnode1%:8485;%HOSTGROUP::prodnode2%:8485;%HOSTGROUP::prodnode3%:8485/prod",
          "dfs.nameservices" : "prod"
        }
      }
    }
  ],
  "host_groups" : [
    {
      "name" : "prodnode1",
      "components" : [
        { "name" : "NAMENODE" },
        { "name" : "JOURNALNODE" },
        { "name" : "ZKFC" },
        { "name" : "NODEMANAGER" },
        { "name" : "DATANODE" },
        { "name" : "ZOOKEEPER_CLIENT" },
        { "name" : "HDFS_CLIENT" },
        { "name" : "YARN_CLIENT" },
        { "name" : "FALCON_CLIENT" },
        { "name" : "OOZIE_CLIENT" },
        { "name" : "HIVE_CLIENT" },
        { "name" : "MAPREDUCE2_CLIENT" },
        { "name" : "ZOOKEEPER_SERVER" }
      ],
      "cardinality" : 1
    },
    {
      "name" : "prodnode2",
      "components" : [
        { "name" : "JOURNALNODE" },
        { "name" : "MYSQL_SERVER" },
        { "name" : "HIVE_SERVER" },
        { "name" : "HIVE_METASTORE" },
        { "name" : "WEBHCAT_SERVER" },
        { "name" : "NODEMANAGER" },
        { "name" : "DATANODE" },
        { "name" : "ZOOKEEPER_CLIENT" },
        { "name" : "ZOOKEEPER_SERVER" },
        { "name" : "HDFS_CLIENT" },
        { "name" : "YARN_CLIENT" },
        { "name" : "FALCON_SERVER" },
        { "name" : "OOZIE_SERVER" },
        { "name" : "FALCON_CLIENT" },
        { "name" : "OOZIE_CLIENT" },
        { "name" : "HIVE_CLIENT" },
        { "name" : "MAPREDUCE2_CLIENT" }
      ],
      "cardinality" : 1
    },
    {
      "name" : "prodnode3",
      "components" : [
        { "name" : "RESOURCEMANAGER" },
        { "name" : "JOURNALNODE" },
        { "name" : "ZKFC" },
        { "name" : "NAMENODE" },
        { "name" : "APP_TIMELINE_SERVER" },
        { "name" : "HISTORYSERVER" },
        { "name" : "NODEMANAGER" },
        { "name" : "DATANODE" },
        { "name" : "ZOOKEEPER_CLIENT" },
        { "name" : "ZOOKEEPER_SERVER" },
        { "name" : "HDFS_CLIENT" },
        { "name" : "YARN_CLIENT" },
        { "name" : "HIVE_CLIENT" },
        { "name" : "MAPREDUCE2_CLIENT" }
      ],
      "cardinality" : 1
    }
  ],
  "Blueprints" : {
    "blueprint_name" : "prod",
    "stack_name" : "HDP",
    "stack_version" : "2.4"
  }
}

Note – I have kept the NameNodes on prodnode1 and prodnode3; you can change this according to your requirements. I have also added a few more services such as Hive, Falcon, and Oozie; you can remove them or add others as needed.
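The host group names must match exactly between hostmapping.json and cluster_configuration.json, otherwise Ambari will reject the cluster creation request. A quick cross-check (a sketch, assuming jq is installed):

# Both commands should print the same list of host group names
jq -r '.host_groups[].name' hostmapping.json | sort
jq -r '.host_groups[].name' cluster_configuration.json | sort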

 

Step 4: Create an internal repository map

4.1: HDP repository – copy the contents below, change base_url to the hostname/IP address of your internal repository server, and save it as repo.json.

{
  "Repositories" : {
    "base_url" : "http://<ip-address-of-repo-server>/hdp/centos6/HDP-2.4.2.0",
    "verify_base_url" : true
  }
}

 

4.2: HDP-UTILS repository – copy the contents below, change base_url to the hostname/IP address of your internal repository server, and save it as hdputils-repo.json.

{
  "Repositories" : {
    "base_url" : "http://<ip-address-of-repo-server>/hdp/centos6/HDP-UTILS-1.1.0.20",
    "verify_base_url" : true
  }
}
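Before registering them with Ambari, you can check that the repository URLs are reachable from the Ambari server (a sanity check; repodata/repomd.xml is the standard yum repository metadata file):

curl -s -o /dev/null -w "%{http_code}\n" http://<ip-address-of-repo-server>/hdp/centos6/HDP-2.4.2.0/repodata/repomd.xml
# Expect 200; anything else means the base_url is wrong or the repo server is unreachable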

 

Step 5: Register the blueprint with the Ambari server by executing the command below.

curl -H "X-Requested-By: ambari" -X POST -u admin:admin http://<ambari-server-hostname>:8080/api/v1/blueprints/prod -d @cluster_configuration.json

Note – the blueprint name in the URL (here, prod) must match the "blueprint" value referenced in hostmapping.json.
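To confirm the registration, you can fetch the blueprint back from the API:

curl -u admin:admin http://<ambari-server-hostname>:8080/api/v1/blueprints/prod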

 

Step 6: Set up the internal repositories via the REST API.

Execute the curl calls below to register the internal repositories.

curl -H "X-Requested-By: ambari" -X PUT -u admin:admin http://<ambari-server-hostname>:8080/api/v1/stacks/HDP/versions/2.4/operating_systems/redhat6/repositories/HDP-2.4 -d @repo.json
curl -H "X-Requested-By: ambari" -X PUT -u admin:admin http://<ambari-server-hostname>:8080/api/v1/stacks/HDP/versions/2.4/operating_systems/redhat6/repositories/HDP-UTILS-1.1.0.20 -d @hdputils-repo.json
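You can verify the repository settings with a GET call on the same endpoint, for example:

curl -u admin:admin http://<ambari-server-hostname>:8080/api/v1/stacks/HDP/versions/2.4/operating_systems/redhat6/repositories/HDP-2.4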

 

Step 7: Pull the trigger! The command below will start the cluster installation.

curl -H "X-Requested-By: ambari" -X POST -u admin:admin http://<ambari-server-hostname>:8080/api/v1/clusters/multinode-hdp -d @hostmapping.json
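This call returns immediately with a request href while the installation continues in the background. You can watch the progress in the Ambari UI, or poll the request via the API (the request id is typically 1 for a fresh cluster):

curl -u admin:admin http://<ambari-server-hostname>:8080/api/v1/clusters/multinode-hdp/requests/1
# Look at Requests/progress_percent and Requests/request_status in the response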

 

Please refer to Part 4 for setting up HDP with Kerberos authentication via Ambari Blueprints.

 

Please feel free to comment if you need any further help on this. Happy Hadooping!! :)

 

 
