
Automate HDP installation using Ambari Blueprints – Part 6


 

In the previous post, we saw how to automate HDP installation with Kerberos authentication on a multi-node cluster using Ambari Blueprints.

 

In this post, we will see how to deploy a multi-node HDP cluster with Resource Manager HA via Ambari Blueprints.

 

Below are simple steps to install a multi-node HDP cluster with Resource Manager HA using an internal repository via Ambari Blueprints.

 

Step 1: Install Ambari server using the steps mentioned in the link below

http://docs.hortonworks.com/HDPDocuments/Ambari-2.4.2.0/bk_ambari-installation/content/ch_Installing_Ambari.html

 

Step 2: Register ambari-agent manually

Install the ambari-agent package on all the nodes in the cluster and set the hostname to the Ambari server host (FQDN) in /etc/ambari-agent/conf/ambari-agent.ini, then start the agent.
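For example, on a RHEL/CentOS node the registration could look like the commands below (a minimal sketch; it assumes the Ambari repo is already configured on each node, and ambari.crazyadmins.com is a placeholder for your Ambari server FQDN):

# Install the agent package (assumes the Ambari yum repo is already set up on this node)
yum install -y ambari-agent

# Point the agent at the Ambari server in the [server] section of ambari-agent.ini
sed -i 's/^hostname=.*/hostname=ambari.crazyadmins.com/' /etc/ambari-agent/conf/ambari-agent.ini

# Start the agent so it registers with the Ambari server
ambari-agent start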

 

Step 3: Configure blueprints

Please follow the steps below to create the Blueprint files.

 

3.1 Create the hostmap.json (cluster creation template) file as shown below:

Note – This file contains information about all the hosts that are part of your HDP cluster. It is also called the cluster creation template in the Apache Ambari documentation.

{
  "blueprint" : "hdptest",
  "default_password" : "hadoop",
  "host_groups" : [
    {
      "name" : "blueprint1",
      "hosts" : [
        {
          "fqdn" : "blueprint1.crazyadmins.com"
        }
      ]
    },
    {
      "name" : "blueprint2",
      "hosts" : [
        {
          "fqdn" : "blueprint2.crazyadmins.com"
        }
      ]
    },
    {
      "name" : "blueprint3",
      "hosts" : [
        {
          "fqdn" : "blueprint3.crazyadmins.com"
        }
      ]
    }
  ]
}

 

3.2 Create the cluster_config.json (blueprint) file; it contains the mapping of host groups to HDP components.

{
  "configurations" : [
    {
      "core-site" : {
        "properties" : {
          "fs.defaultFS" : "hdfs://%HOSTGROUP::blueprint1%:8020"
        }
      }
    },
    {
      "yarn-site" : {
        "properties" : {
          "hadoop.registry.rm.enabled" : "false",
          "hadoop.registry.zk.quorum" : "%HOSTGROUP::blueprint3%:2181,%HOSTGROUP::blueprint2%:2181,%HOSTGROUP::blueprint1%:2181",
          "yarn.log.server.url" : "http://%HOSTGROUP::blueprint3%:19888/jobhistory/logs",
          "yarn.resourcemanager.address" : "%HOSTGROUP::blueprint2%:8050",
          "yarn.resourcemanager.admin.address" : "%HOSTGROUP::blueprint2%:8141",
          "yarn.resourcemanager.cluster-id" : "yarn-cluster",
          "yarn.resourcemanager.ha.automatic-failover.zk-base-path" : "/yarn-leader-election",
          "yarn.resourcemanager.ha.enabled" : "true",
          "yarn.resourcemanager.ha.rm-ids" : "rm1,rm2",
          "yarn.resourcemanager.hostname" : "%HOSTGROUP::blueprint2%",
          "yarn.resourcemanager.hostname.rm1" : "%HOSTGROUP::blueprint2%",
          "yarn.resourcemanager.hostname.rm2" : "%HOSTGROUP::blueprint3%",
          "yarn.resourcemanager.webapp.address.rm1" : "%HOSTGROUP::blueprint2%:8088",
          "yarn.resourcemanager.webapp.address.rm2" : "%HOSTGROUP::blueprint3%:8088",
          "yarn.resourcemanager.recovery.enabled" : "true",
          "yarn.resourcemanager.resource-tracker.address" : "%HOSTGROUP::blueprint2%:8025",
          "yarn.resourcemanager.scheduler.address" : "%HOSTGROUP::blueprint2%:8030",
          "yarn.resourcemanager.store.class" : "org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore",
          "yarn.resourcemanager.webapp.address" : "%HOSTGROUP::blueprint2%:8088",
          "yarn.resourcemanager.webapp.https.address" : "%HOSTGROUP::blueprint2%:8090",
          "yarn.timeline-service.address" : "%HOSTGROUP::blueprint3%:10200",
          "yarn.timeline-service.webapp.address" : "%HOSTGROUP::blueprint3%:8188",
          "yarn.timeline-service.webapp.https.address" : "%HOSTGROUP::blueprint3%:8190"
        }
      }
    }
  ],
  "host_groups" : [
    {
      "name" : "blueprint1",
      "components" : [
        { "name" : "NAMENODE" },
        { "name" : "NODEMANAGER" },
        { "name" : "DATANODE" },
        { "name" : "ZOOKEEPER_CLIENT" },
        { "name" : "HDFS_CLIENT" },
        { "name" : "YARN_CLIENT" },
        { "name" : "MAPREDUCE2_CLIENT" },
        { "name" : "ZOOKEEPER_SERVER" }
      ],
      "cardinality" : 1
    },
    {
      "name" : "blueprint2",
      "components" : [
        { "name" : "SECONDARY_NAMENODE" },
        { "name" : "RESOURCEMANAGER" },
        { "name" : "NODEMANAGER" },
        { "name" : "DATANODE" },
        { "name" : "ZOOKEEPER_CLIENT" },
        { "name" : "ZOOKEEPER_SERVER" },
        { "name" : "HDFS_CLIENT" },
        { "name" : "YARN_CLIENT" },
        { "name" : "MAPREDUCE2_CLIENT" }
      ],
      "cardinality" : 1
    },
    {
      "name" : "blueprint3",
      "components" : [
        { "name" : "RESOURCEMANAGER" },
        { "name" : "APP_TIMELINE_SERVER" },
        { "name" : "HISTORYSERVER" },
        { "name" : "NODEMANAGER" },
        { "name" : "DATANODE" },
        { "name" : "ZOOKEEPER_CLIENT" },
        { "name" : "ZOOKEEPER_SERVER" },
        { "name" : "HDFS_CLIENT" },
        { "name" : "YARN_CLIENT" },
        { "name" : "MAPREDUCE2_CLIENT" }
      ],
      "cardinality" : 1
    }
  ],
  "Blueprints" : {
    "blueprint_name" : "hdptest",
    "stack_name" : "HDP",
    "stack_version" : "2.5"
  }
}

Note – I have kept the two Resource Managers on blueprint2 and blueprint3; you can change this placement according to your requirements.
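Tip – before registering anything with Ambari, it is worth checking that both JSON files are syntactically valid. A quick sketch using Python's built-in json module (assuming python is available on the Ambari server):

python -m json.tool hostmap.json > /dev/null && echo "hostmap.json is valid JSON"
python -m json.tool cluster_config.json > /dev/null && echo "cluster_config.json is valid JSON"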

 

Step 4: Create an internal repository map

 

4.1: HDP repository – copy the contents below, change base_url to the hostname/IP address of your internal repository server, and save it as repo.json.

{
  "Repositories" : {
    "base_url" : "http://<ip-address-of-repo-server>/hdp/centos6/HDP-2.5.3.0",
    "verify_base_url" : true
  }
}

 

4.2: HDP-UTILS repository – copy the contents below, change base_url to the hostname/IP address of your internal repository server, and save it as hdputils-repo.json.

 

{
  "Repositories" : {
    "base_url" : "http://<ip-address-of-repo-server>/hdp/centos6/HDP-UTILS-1.1.0.21",
    "verify_base_url" : true
  }
}
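Before wiring these repositories into Ambari, you can confirm that the base URLs are actually reachable from the cluster nodes. A minimal sketch (it assumes the repositories were built with createrepo, so a repodata/repomd.xml file exists under each base_url; replace the placeholder with your repo server):

# A 200 OK for repomd.xml means yum will be able to read the repository
curl -I http://<ip-address-of-repo-server>/hdp/centos6/HDP-2.5.3.0/repodata/repomd.xml
curl -I http://<ip-address-of-repo-server>/hdp/centos6/HDP-UTILS-1.1.0.21/repodata/repomd.xml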

 

Step 5: Register the blueprint with the Ambari server by executing the command below

curl -H "X-Requested-By: ambari"-X POST -u admin:admin http://<ambari-server-hostname>:8080/api/v1/blueprints/multinode-hdp -d @cluster_config.json

Step 6: Set up the internal repositories via the REST API.

Execute the curl calls below to register the internal repositories with Ambari.

curl -H "X-Requested-By: ambari"-X PUT -u admin:admin http://<ambari-server-hostname>:8080/api/v1/stacks/HDP/versions/2.4/operating_systems/redhat6/repositories/HDP-2.4 -d @repo.json

curl -H "X-Requested-By: ambari"-X PUT -u admin:admin http://<ambari-server-hostname>:8080/api/v1/stacks/HDP/versions/2.4/operating_systems/redhat6/repositories/HDP-UTILS-1.1.0.20 -d @hdputils-repo.json

Step 7: Pull the trigger! The command below will start the cluster installation.

curl -H "X-Requested-By: ambari"-X POST -u admin:admin http://<ambari-server-hostname>:8080/api/v1/clusters/multinode-hdp -d @hostmap.json

Please feel free to comment if you need any further help on this. Happy Hadooping!!  :)

 

 

 

 


Automate HDP installation using Ambari Blueprints – Part 5


 

How to deploy an HDP cluster with Kerberos authentication using Ambari Blueprints?

 

You are at the right place! :) Please follow my article on HCC below to set up a multi-node HDP cluster using Ambari Blueprints with Kerberos authentication (MIT KDC).

https://community.hortonworks.com/articles/78969/automate-hdp-installation-using-ambari-blueprints-4.html

 

 

Please refer to the next part for automated HDP installation using Ambari Blueprints with Resource Manager high availability.

 

Please feel free to comment if you need any further help on this. Happy Hadooping!! :)

 


Automate HDP installation using Ambari Blueprints – Part 4


How to deploy an HDP cluster with Kerberos authentication using Ambari Blueprints?

You are at the right place! :) Please follow my article on HCC below to set up a single-node HDP cluster using Ambari Blueprints with Kerberos authentication (MIT KDC).

https://community.hortonworks.com/articles/70189/automate-hdp-installation-using-ambari-blueprints-3.html

 

Please refer to the next part for automated HDP installation using Ambari Blueprints with Kerberos authentication on a multi-node cluster.

 

Please feel free to comment if you need any further help on this. Happy Hadooping!! :)


How to semi-automate deploying a dev HDP cluster

Purpose of this article:

When you install HDP for a dev/test environment, you end up repeating the same commands to set up your host OS. To save time, I created a Bash script which helps set up the host OS (Ubuntu only) and a Docker image (CentOS).

 

What this script does:

  1. Installs packages on the Ubuntu host OS
  2. Sets up Docker, such as creating an image and spawning containers
  3. [Optional] Sets up a local repository for HDP (not Ambari) with Apache2

 

What this script does NOT do:

  1. As of this writing, it does not install HDP itself.
  2. Please use Ambari Blueprints if you would like to automate the HDP installation as well:
    http://crazyadmins.com/automate-hdp-installation-using-ambari-blueprints-part-2/
  3. This setup is NOT for production environments, but it is useful for testing HA components.

Host setup steps:

 

Step 1: Install Ubuntu 14.x LTS on VirtualBox/VMware/Azure/AWS.

It should be easy to deploy an Ubuntu VM if you use Azure or AWS.
If you are using VirtualBox/VMware, you might want to back up the freshly installed Ubuntu VM as a template, so that you can clone it later.

 

Step 2: Log in to Ubuntu and become root (sudo -i)

 

Step 3: Download the script using the command below

wget https://raw.githubusercontent.com/hajimeo/samples/master/bash/start_hdp.sh -O ./start_hdp.sh && chmod u+x ./start_hdp.sh

 

Step 4: Start the script in install mode

./start_hdp.sh -i

 

Step 5: Start of the interview

The script will ask a few questions, such as your choice of guest OS, Ambari version, HDP version, etc. Normally the default values should be OK, so you can just keep pressing the Enter key.
NOTE: At the end of the interview, it asks you to save your answers in a text file. You can reuse this file to skip the interview when you install a new cluster.

 

Step 6: Confirm your answers 

After saving your responses, it will ask you "Would you like to start setup this host? [Y]:". If you answer yes, it starts setting up your Ubuntu host OS. After a while the script finishes, or stops if there is any error.

How long this takes depends on your choices. If you selected the local repo option, downloading the repository may take a long time.

 

Step 7: Complete the setup

Once the script has completed successfully, your chosen Ambari Server version should be installed and running in the specified Docker container on port 8080.

NOTE: At this moment, the Docker containers run on a private network, so you will need to do one of the following (option 1 is the easiest):

  1. Create a SOCKS proxy from your local PC on port 18080:

ssh -D 18080 username@ubuntu-hostname

  2. Forward your local port 8080 to node1:8080:

ssh -L 8080:node1.localdomain:8080 username@ubuntu-hostname

  3. Set up a proper proxy server, such as Squid.

If you decide to set up a proxy, installing a browser add-on such as "SwitchySharp" is handy.

  1. Once you have confirmed that you can use the Ambari web interface, proceed to install HDP.
    If you chose to set up an HDP local repository, replace "public-repo-1.hortonworks.com" with "dockerhost1.localdomain" (if you used the default value).
  2. The private key should be /root/.ssh/id_rsa on any node.
  3. The remaining steps are the same as a normal HDP installation.
    NOTE: if you decided to install an older Ambari version, there is a known issue, AMBARI-8620.

 

Host startup step

If you shut down the VM, next time you can just run "./start_hdp.sh -s", which starts up the containers, Ambari Server, Ambari Agents, and the HDP services.

 

How to semi-automate deploying a dev HDP cluster – Did you like this article? Please feel free to send an email to info@crazyadmins.com if you have any further questions on this. Please don't forget to like our Facebook page. Happy Hadooping!! :)

 


Automate HDP installation using Ambari Blueprints – Part 1

A blog post after a long time :) Okay, in this post we will see how to automate HDP installation using Ambari Blueprints.

 

What are Ambari Blueprints?

Ambari Blueprints are the definition of your HDP cluster in JSON format. They contain information about all the hosts in your cluster, their components, the mapping of stack components to each host or host group, and other cool stuff. Using Blueprints, we can call Ambari APIs to completely automate the HDP installation process. Interesting stuff, isn't it?

Let's get started with a single-node cluster installation. Below are the steps to set up a single-node HDP cluster with Ambari Blueprints.

 

Step 1: Install Ambari server using the steps mentioned in the link below

http://docs.hortonworks.com/HDPDocuments/Ambari-2.1.2.1/bk_Installing_HDP_AMB/content/_download_the_ambari_repo_lnx6.html

 

Step 2: Register ambari-agent manually

Install the ambari-agent package on all the nodes in the cluster and set the hostname to the Ambari server host (FQDN) in /etc/ambari-agent/conf/ambari-agent.ini, then start the agent.

 

Step 3: Configure blueprints

Please follow the steps below to create the Blueprint files.

3.1 Create hostmapping.json file as shown below:

{
  "blueprint" : "single-node-hdp-cluster",
  "default_password" : "admin",
  "host_groups" :[
    {
      "name" : "host_group_1",
      "hosts" : [
        {
          "fqdn" : "<fqdn-of-single-node-cluster-machine>"
        }
      ]
    }
  ]
}

 

3.2 Create the cluster_configuration.json file; it contains the mapping of hosts to HDP components.

{
  "configurations" : [ ],
  "host_groups" : [
    {
      "name" : "host_group_1",
      "components" : [
        {
          "name" : "NAMENODE"
        },
        {
          "name" : "SECONDARY_NAMENODE"
        },
        {
          "name" : "DATANODE"
        },
        {
          "name" : "HDFS_CLIENT"
        },
        {
          "name" : "RESOURCEMANAGER"
        },
        {
          "name" : "NODEMANAGER"
        },
        {
          "name" : "YARN_CLIENT"
        },
        {
          "name" : "HISTORYSERVER"
        },
        {
          "name" : "APP_TIMELINE_SERVER"
        },
        {
          "name" : "MAPREDUCE2_CLIENT"
        },
        {
          "name" : "ZOOKEEPER_SERVER"
        },
        {
          "name" : "ZOOKEEPER_CLIENT"
        }
      ],
      "cardinality" : "1"
    }
  ],
  "Blueprints" : {
    "blueprint_name" : "single-node-hdp-cluster",
    "stack_name" : "HDP",
    "stack_version" : "2.3"
  }
}

 

Step 4: Register the blueprint with the Ambari server by executing the command below

curl -H "X-Requested-By: ambari" -X POST -u admin:admin http://<ambari-hostname>:8080/api/v1/blueprints/<blueprint-name> -d @cluster_configuration.json

 

Step 6: Pull the trigger! The command below will start the cluster installation.

curl -H "X-Requested-By: ambari" -X POST -u admin:admin http://<ambari-host>:8080/api/v1/clusters/<new-cluster-name> -d @hostmapping.json

 

Step 7: We can track the installation status with the REST calls below, or check the same from the Ambari UI.

 

curl -H "X-Requested-By: ambari" -X GET -u admin:admin http://<ambari-hostname>:8080/api/v1/clusters/mycluster/requests/

 

curl -H "X-Requested-By: ambari" -X GET -u admin:admin http://<ambari-hostname>:8080/api/v1/clusters/mycluster/requests/<request-number>

 

Thank you for your time! In the next part, we will see the installation of a multi-node HDP cluster using Ambari Blueprints.
