
Automate HDP installation using Ambari Blueprints – Part 6


 

In the previous post, we saw how to automate HDP installation with Kerberos authentication on a multi-node cluster using Ambari Blueprints.

 

In this post, we will see how to deploy a multi-node HDP cluster with ResourceManager HA via Ambari Blueprints.

 

Below are the steps to install a multi-node HDP cluster with ResourceManager HA, using an internal repository, via Ambari Blueprints.

 

Step 1: Install the Ambari server using the steps in the link below:

http://docs.hortonworks.com/HDPDocuments/Ambari-2.4.2.0/bk_ambari-installation/content/ch_Installing_Ambari.html

 

Step 2: Register ambari-agent manually

Install the ambari-agent package on all nodes in the cluster and, in /etc/ambari-agent/conf/ambari-agent.ini, set hostname to the Ambari server's FQDN.
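Across many nodes this edit is easy to script. A minimal sketch is below; it works on a local copy of the file so it can be run safely anywhere, and the server FQDN is a placeholder for your own. In practice you would point sed at /etc/ambari-agent/conf/ambari-agent.ini on each node.

```shell
# Create a local sample of ambari-agent.ini; the [server] section's
# "hostname" key is what the agent uses to find the Ambari server.
cat > ambari-agent.ini <<'EOF'
[server]
hostname=localhost
url_port=8440
secured_url_port=8441
EOF

# Rewrite the hostname line to the Ambari server's FQDN (placeholder value).
sed -i 's/^hostname=.*/hostname=ambari.crazyadmins.com/' ambari-agent.ini

# Show the result.
grep '^hostname=' ambari-agent.ini   # hostname=ambari.crazyadmins.com
```

After editing the real file, restart the agent (`ambari-agent restart`) so the change takes effect.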

 

Step 3: Configure blueprints

Please follow the steps below to create the blueprint files.

 

3.1 Create the hostmap.json (cluster creation template) file as shown below:

Note – This file lists all the hosts that are part of your HDP cluster. The Apache Ambari documentation calls it the cluster creation template.

{
  "blueprint" : "hdptest",
  "default_password" : "hadoop",
  "host_groups" : [
    {
      "name" : "blueprint1",
      "hosts" : [
        { "fqdn" : "blueprint1.crazyadmins.com" }
      ]
    },
    {
      "name" : "blueprint2",
      "hosts" : [
        { "fqdn" : "blueprint2.crazyadmins.com" }
      ]
    },
    {
      "name" : "blueprint3",
      "hosts" : [
        { "fqdn" : "blueprint3.crazyadmins.com" }
      ]
    }
  ]
}

 

3.2 Create the cluster_config.json (blueprint) file; it contains the mapping of HDP components to host groups:

{
  "configurations" : [
    {
      "core-site" : {
        "properties" : {
          "fs.defaultFS" : "hdfs://%HOSTGROUP::blueprint1%:8020"
        }
      }
    },
    {
      "yarn-site" : {
        "properties" : {
          "hadoop.registry.rm.enabled" : "false",
          "hadoop.registry.zk.quorum" : "%HOSTGROUP::blueprint3%:2181,%HOSTGROUP::blueprint2%:2181,%HOSTGROUP::blueprint1%:2181",
          "yarn.log.server.url" : "http://%HOSTGROUP::blueprint3%:19888/jobhistory/logs",
          "yarn.resourcemanager.address" : "%HOSTGROUP::blueprint2%:8050",
          "yarn.resourcemanager.admin.address" : "%HOSTGROUP::blueprint2%:8141",
          "yarn.resourcemanager.cluster-id" : "yarn-cluster",
          "yarn.resourcemanager.ha.automatic-failover.zk-base-path" : "/yarn-leader-election",
          "yarn.resourcemanager.ha.enabled" : "true",
          "yarn.resourcemanager.ha.rm-ids" : "rm1,rm2",
          "yarn.resourcemanager.hostname" : "%HOSTGROUP::blueprint2%",
          "yarn.resourcemanager.hostname.rm1" : "%HOSTGROUP::blueprint2%",
          "yarn.resourcemanager.hostname.rm2" : "%HOSTGROUP::blueprint3%",
          "yarn.resourcemanager.webapp.address.rm1" : "%HOSTGROUP::blueprint2%:8088",
          "yarn.resourcemanager.webapp.address.rm2" : "%HOSTGROUP::blueprint3%:8088",
          "yarn.resourcemanager.recovery.enabled" : "true",
          "yarn.resourcemanager.resource-tracker.address" : "%HOSTGROUP::blueprint2%:8025",
          "yarn.resourcemanager.scheduler.address" : "%HOSTGROUP::blueprint2%:8030",
          "yarn.resourcemanager.store.class" : "org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore",
          "yarn.resourcemanager.webapp.address" : "%HOSTGROUP::blueprint2%:8088",
          "yarn.resourcemanager.webapp.https.address" : "%HOSTGROUP::blueprint2%:8090",
          "yarn.timeline-service.address" : "%HOSTGROUP::blueprint3%:10200",
          "yarn.timeline-service.webapp.address" : "%HOSTGROUP::blueprint3%:8188",
          "yarn.timeline-service.webapp.https.address" : "%HOSTGROUP::blueprint3%:8190"
        }
      }
    }
  ],
  "host_groups" : [
    {
      "name" : "blueprint1",
      "components" : [
        { "name" : "NAMENODE" },
        { "name" : "NODEMANAGER" },
        { "name" : "DATANODE" },
        { "name" : "ZOOKEEPER_CLIENT" },
        { "name" : "HDFS_CLIENT" },
        { "name" : "YARN_CLIENT" },
        { "name" : "MAPREDUCE2_CLIENT" },
        { "name" : "ZOOKEEPER_SERVER" }
      ],
      "cardinality" : 1
    },
    {
      "name" : "blueprint2",
      "components" : [
        { "name" : "SECONDARY_NAMENODE" },
        { "name" : "RESOURCEMANAGER" },
        { "name" : "NODEMANAGER" },
        { "name" : "DATANODE" },
        { "name" : "ZOOKEEPER_CLIENT" },
        { "name" : "ZOOKEEPER_SERVER" },
        { "name" : "HDFS_CLIENT" },
        { "name" : "YARN_CLIENT" },
        { "name" : "MAPREDUCE2_CLIENT" }
      ],
      "cardinality" : 1
    },
    {
      "name" : "blueprint3",
      "components" : [
        { "name" : "RESOURCEMANAGER" },
        { "name" : "APP_TIMELINE_SERVER" },
        { "name" : "HISTORYSERVER" },
        { "name" : "NODEMANAGER" },
        { "name" : "DATANODE" },
        { "name" : "ZOOKEEPER_CLIENT" },
        { "name" : "ZOOKEEPER_SERVER" },
        { "name" : "HDFS_CLIENT" },
        { "name" : "YARN_CLIENT" },
        { "name" : "MAPREDUCE2_CLIENT" }
      ],
      "cardinality" : 1
    }
  ],
  "Blueprints" : {
    "blueprint_name" : "hdptest",
    "stack_name" : "HDP",
    "stack_version" : "2.5"
  }
}

Note – I have kept the ResourceManagers on blueprint2 and blueprint3; you can change this according to your requirements.
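Before registering anything, it is worth cross-checking the creation template against the blueprint, since a host-group or blueprint-name mismatch is much easier to catch here than in a failed API call. Below is a small sketch of such a check; the inline sample data mirrors the files above in abbreviated form.

```python
def check(hostmap, blueprint):
    """Return a list of problems; an empty list means the files agree."""
    problems = []
    # The template's "blueprint" field must name the registered blueprint.
    if hostmap["blueprint"] != blueprint["Blueprints"]["blueprint_name"]:
        problems.append("blueprint name mismatch")
    # Every host group the template references must exist in the blueprint,
    # and each must map to at least one host.
    bp_groups = {g["name"] for g in blueprint["host_groups"]}
    for g in hostmap["host_groups"]:
        if g["name"] not in bp_groups:
            problems.append("unknown host group: " + g["name"])
        if not g.get("hosts"):
            problems.append("no hosts in group: " + g["name"])
    return problems

# Minimal inline example mirroring the files above:
hostmap = {
    "blueprint": "hdptest",
    "host_groups": [
        {"name": "blueprint1", "hosts": [{"fqdn": "blueprint1.crazyadmins.com"}]}
    ],
}
blueprint = {
    "host_groups": [{"name": "blueprint1"}],
    "Blueprints": {"blueprint_name": "hdptest"},
}
print(check(hostmap, blueprint))  # [] when everything lines up
```

In practice you would `json.load` hostmap.json and cluster_config.json and pass the parsed dicts to `check`.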

 

Step 4: Create an internal repository map

 

4.1: HDP repository – copy the contents below, change base_url to the hostname/IP address of your internal repository server, and save it as repo.json.

{
"Repositories":{
"base_url":"http://<ip-address-of-repo-server>/hdp/centos6/HDP-2.5.3.0",
"verify_base_url":true
}
}

 

4.2: HDP-UTILS repository – copy the contents below, change base_url to the hostname/IP address of your internal repository server, and save it as hdputils-repo.json.

 

{
"Repositories":{
"base_url":"http://<ip-address-of-repo-server>/hdp/centos6/HDP-UTILS-1.1.0.21",
"verify_base_url":true
}
}
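The two payloads share the same shape, so if you script your deployments you can generate both from one place instead of hand-editing two files. The helper below is a hypothetical convenience, not part of Ambari, and the repo server address is a placeholder.

```python
import json

# Hypothetical helper: build a repository payload matching the
# repo.json / hdputils-repo.json shape shown above.
def repo_payload(base_url, verify=True):
    return json.dumps(
        {"Repositories": {"base_url": base_url, "verify_base_url": verify}},
        indent=2)

# Example with a placeholder repository server address:
print(repo_payload("http://repo.example.com/hdp/centos6/HDP-2.5.3.0"))
```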

 

Step 5: Register the blueprint with the Ambari server by executing the command below. Note that the blueprint name in the URL must match the "blueprint" field in hostmap.json (hdptest here).

curl -H "X-Requested-By: ambari" -X POST -u admin:admin http://<ambari-server-hostname>:8080/api/v1/blueprints/hdptest -d @cluster_config.json

Step 6: Set up the internal repositories via the REST API.

Execute the curl calls below to set up the internal repositories. The stack version and repository names must match the blueprint (HDP 2.5) and the repo files from Step 4.

curl -H "X-Requested-By: ambari" -X PUT -u admin:admin http://<ambari-server-hostname>:8080/api/v1/stacks/HDP/versions/2.5/operating_systems/redhat6/repositories/HDP-2.5 -d @repo.json

curl -H "X-Requested-By: ambari" -X PUT -u admin:admin http://<ambari-server-hostname>:8080/api/v1/stacks/HDP/versions/2.5/operating_systems/redhat6/repositories/HDP-UTILS-1.1.0.21 -d @hdputils-repo.json

Step 7: Pull the trigger! The command below starts the cluster installation.

curl -H "X-Requested-By: ambari" -X POST -u admin:admin http://<ambari-server-hostname>:8080/api/v1/clusters/multinode-hdp -d @hostmap.json

Please feel free to comment if you need any further help on this. Happy Hadooping!!  :)

 

 

 

 


Automate HDP installation using Ambari Blueprints – Part 1

A blog post after a long time :) Okay, in this post we will see how to automate HDP installation using Ambari Blueprints.

 

What are Ambari Blueprints?

Ambari Blueprints are a definition of your HDP cluster in JSON format. A blueprint contains information about all the hosts in your cluster, their components, the mapping of stack components to hosts or host groups, and other cool stuff. Using Blueprints, we can call Ambari APIs to completely automate the HDP installation process. Interesting stuff, isn't it?

Let's get started with a single-node cluster installation. Below are the steps to set up a single-node HDP cluster with Ambari Blueprints.

 

Step 1: Install the Ambari server using the steps in the link below:

http://docs.hortonworks.com/HDPDocuments/Ambari-2.1.2.1/bk_Installing_HDP_AMB/content/_download_the_ambari_repo_lnx6.html

 

Step 2: Register ambari-agent manually

Install the ambari-agent package on all nodes in the cluster and, in /etc/ambari-agent/conf/ambari-agent.ini, set hostname to the Ambari server's FQDN.

 

Step 3: Configure blueprints

Please follow the steps below to create the blueprint files.

3.1 Create the hostmapping.json file as shown below:

{
  "blueprint" : "single-node-hdp-cluster",
  "default_password" : "admin",
  "host_groups" :[
    {
      "name" : "host_group_1",
      "hosts" : [
        {
          "fqdn" : "<fqdn-of-single-node-cluster-machine>"
        }
      ]
    }
  ]
}

 

3.2 Create the cluster_configuration.json file; it contains the mapping of HDP components to host groups:

{
  "configurations" : [ ],
  "host_groups" : [
    {
      "name" : "host_group_1",
      "components" : [
        {
          "name" : "NAMENODE"
        },
        {
          "name" : "SECONDARY_NAMENODE"
        },
        {
          "name" : "DATANODE"
        },
        {
          "name" : "HDFS_CLIENT"
        },
        {
          "name" : "RESOURCEMANAGER"
        },
        {
          "name" : "NODEMANAGER"
        },
        {
          "name" : "YARN_CLIENT"
        },
        {
          "name" : "HISTORYSERVER"
        },
        {
          "name" : "APP_TIMELINE_SERVER"
        },
        {
          "name" : "MAPREDUCE2_CLIENT"
        },
        {
          "name" : "ZOOKEEPER_SERVER"
        },
        {
          "name" : "ZOOKEEPER_CLIENT"
        }
      ],
      "cardinality" : "1"
    }
  ],
  "Blueprints" : {
    "blueprint_name" : "single-node-hdp-cluster",
    "stack_name" : "HDP",
    "stack_version" : "2.3"
  }
}

 

Step 4: Register the blueprint with the Ambari server by executing the command below:

curl -H "X-Requested-By: ambari" -X POST -u admin:admin http://<ambari-hostname>:8080/api/v1/blueprints/<blueprint-name> -d @cluster_configuration.json

 

Step 6: Pull the trigger! The command below starts the cluster installation.

curl -H "X-Requested-By: ambari" -X POST -u admin:admin http://<ambari-host>:8080/api/v1/clusters/<new-cluster-name> -d @hostmapping.json

 

Step 7: We can track the installation status via the REST calls below, or check the same from the Ambari UI.

 

curl -H "X-Requested-By: ambari" -X GET -u admin:admin http://<ambari-hostname>:8080/api/v1/clusters/<new-cluster-name>/requests/

 

curl -H "X-Requested-By: ambari" -X GET -u admin:admin http://<ambari-hostname>:8080/api/v1/clusters/<new-cluster-name>/requests/<request-number>
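The request resource returned by these calls carries a "Requests" object with fields such as "request_status" and "progress_percent", which is enough to build a one-line status report. The sketch below parses an abbreviated sample payload (the real Ambari response contains many more fields).

```python
import json

def summarize(raw):
    """Turn a raw Ambari request-resource JSON string into a short status line."""
    req = json.loads(raw)["Requests"]
    return "request %s: %s (%.0f%%)" % (
        req["id"], req["request_status"], req["progress_percent"])

# Abbreviated sample of what the GET call above might return:
sample = '{"Requests":{"id":1,"request_status":"IN_PROGRESS","progress_percent":47.5}}'
print(summarize(sample))  # request 1: IN_PROGRESS (48%)
```

Polling this in a loop until the status reaches COMPLETED (or FAILED) gives you a simple command-line progress monitor.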

 

Thank you for your time! In the next part, we will see the installation of a multi-node HDP cluster using Ambari Blueprints.
