
Automate HDP installation using Ambari Blueprints – Part 6


 

In the previous post, we saw how to automate HDP installation with Kerberos authentication on a multi-node cluster using Ambari Blueprints.

 

In this post, we will see how to deploy a multi-node HDP cluster with ResourceManager HA via Ambari Blueprints.

 

Below are the steps to install a multi-node HDP cluster with ResourceManager HA, using an internal repository, via Ambari Blueprints.

 

Step 1: Install the Ambari server using the steps mentioned in the link below

http://docs.hortonworks.com/HDPDocuments/Ambari-2.4.2.0/bk_ambari-installation/content/ch_Installing_Ambari.html

 

Step 2: Register ambari-agent manually

Install the ambari-agent package on all nodes in the cluster, set the hostname property to the Ambari server host (FQDN) in /etc/ambari-agent/conf/ambari-agent.ini, and start the agent.
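
Here is a minimal sketch of the agent registration, assuming a CentOS/RHEL node with the Ambari repository already configured (ambari.crazyadmins.com is a placeholder for your Ambari server's FQDN):

# Install the agent
yum install -y ambari-agent

# Point the agent at the Ambari server (the hostname property lives in the [server] section)
sed -i 's/^hostname=.*/hostname=ambari.crazyadmins.com/' /etc/ambari-agent/conf/ambari-agent.ini

# Start the agent so it registers with the server
ambari-agent start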

 

Step 3: Configure blueprints

Please follow the steps below to create the Blueprint files.

 

3.1 Create the hostmap.json (cluster creation template) file as shown below:

Note – This file contains information about all the hosts that are part of your HDP cluster. It is also called the cluster creation template in the Apache Ambari documentation.

{
  "blueprint" : "hdptest",
  "default_password" : "hadoop",
  "host_groups" : [
    {
      "name" : "blueprint1",
      "hosts" : [
        {
          "fqdn" : "blueprint1.crazyadmins.com"
        }
      ]
    },
    {
      "name" : "blueprint2",
      "hosts" : [
        {
          "fqdn" : "blueprint2.crazyadmins.com"
        }
      ]
    },
    {
      "name" : "blueprint3",
      "hosts" : [
        {
          "fqdn" : "blueprint3.crazyadmins.com"
        }
      ]
    }
  ]
}

 

3.2 Create the cluster_config.json (blueprint) file; it contains the mapping of HDP components to host groups.

{
  "configurations" : [
    {
      "core-site" : {
        "properties" : {
          "fs.defaultFS" : "hdfs://%HOSTGROUP::blueprint1%:8020"
        }
      }
    },
    {
      "yarn-site" : {
        "properties" : {
          "hadoop.registry.rm.enabled" : "false",
          "hadoop.registry.zk.quorum" : "%HOSTGROUP::blueprint3%:2181,%HOSTGROUP::blueprint2%:2181,%HOSTGROUP::blueprint1%:2181",
          "yarn.log.server.url" : "http://%HOSTGROUP::blueprint3%:19888/jobhistory/logs",
          "yarn.resourcemanager.address" : "%HOSTGROUP::blueprint2%:8050",
          "yarn.resourcemanager.admin.address" : "%HOSTGROUP::blueprint2%:8141",
          "yarn.resourcemanager.cluster-id" : "yarn-cluster",
          "yarn.resourcemanager.ha.automatic-failover.zk-base-path" : "/yarn-leader-election",
          "yarn.resourcemanager.ha.enabled" : "true",
          "yarn.resourcemanager.ha.rm-ids" : "rm1,rm2",
          "yarn.resourcemanager.hostname" : "%HOSTGROUP::blueprint2%",
          "yarn.resourcemanager.hostname.rm1" : "%HOSTGROUP::blueprint2%",
          "yarn.resourcemanager.hostname.rm2" : "%HOSTGROUP::blueprint3%",
          "yarn.resourcemanager.webapp.address.rm1" : "%HOSTGROUP::blueprint2%:8088",
          "yarn.resourcemanager.webapp.address.rm2" : "%HOSTGROUP::blueprint3%:8088",
          "yarn.resourcemanager.recovery.enabled" : "true",
          "yarn.resourcemanager.resource-tracker.address" : "%HOSTGROUP::blueprint2%:8025",
          "yarn.resourcemanager.scheduler.address" : "%HOSTGROUP::blueprint2%:8030",
          "yarn.resourcemanager.store.class" : "org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore",
          "yarn.resourcemanager.webapp.address" : "%HOSTGROUP::blueprint2%:8088",
          "yarn.resourcemanager.webapp.https.address" : "%HOSTGROUP::blueprint2%:8090",
          "yarn.timeline-service.address" : "%HOSTGROUP::blueprint3%:10200",
          "yarn.timeline-service.webapp.address" : "%HOSTGROUP::blueprint3%:8188",
          "yarn.timeline-service.webapp.https.address" : "%HOSTGROUP::blueprint3%:8190"
        }
      }
    }
  ],
  "host_groups" : [
    {
      "name" : "blueprint1",
      "components" : [
        {
          "name" : "NAMENODE"
        },
        {
          "name" : "NODEMANAGER"
        },
        {
          "name" : "DATANODE"
        },
        {
          "name" : "ZOOKEEPER_CLIENT"
        },
        {
          "name" : "HDFS_CLIENT"
        },
        {
          "name" : "YARN_CLIENT"
        },
        {
          "name" : "MAPREDUCE2_CLIENT"
        },
        {
          "name" : "ZOOKEEPER_SERVER"
        }
      ],
      "cardinality" : 1
    },
    {
      "name" : "blueprint2",
      "components" : [
        {
          "name" : "SECONDARY_NAMENODE"
        },
        {
          "name" : "RESOURCEMANAGER"
        },
        {
          "name" : "NODEMANAGER"
        },
        {
          "name" : "DATANODE"
        },
        {
          "name" : "ZOOKEEPER_CLIENT"
        },
        {
          "name" : "ZOOKEEPER_SERVER"
        },
        {
          "name" : "HDFS_CLIENT"
        },
        {
          "name" : "YARN_CLIENT"
        },
        {
          "name" : "MAPREDUCE2_CLIENT"
        }
      ],
      "cardinality" : 1
    },
    {
      "name" : "blueprint3",
      "components" : [
        {
          "name" : "RESOURCEMANAGER"
        },
        {
          "name" : "APP_TIMELINE_SERVER"
        },
        {
          "name" : "HISTORYSERVER"
        },
        {
          "name" : "NODEMANAGER"
        },
        {
          "name" : "DATANODE"
        },
        {
          "name" : "ZOOKEEPER_CLIENT"
        },
        {
          "name" : "ZOOKEEPER_SERVER"
        },
        {
          "name" : "HDFS_CLIENT"
        },
        {
          "name" : "YARN_CLIENT"
        },
        {
          "name" : "MAPREDUCE2_CLIENT"
        }
      ],
      "cardinality" : 1
    }
  ],
  "Blueprints" : {
    "blueprint_name" : "hdptest",
    "stack_name" : "HDP",
    "stack_version" : "2.5"
  }
}

Note – I have kept the ResourceManagers on blueprint2 and blueprint3 (rm1 and rm2 in yarn-site); you can change this according to your requirements.
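
Before registering these files with Ambari, it is worth a quick syntax check; for example, assuming python is available on the node (an optional sanity check):

python -m json.tool cluster_config.json > /dev/null && echo "cluster_config.json is valid JSON"
python -m json.tool hostmap.json > /dev/null && echo "hostmap.json is valid JSON"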

 

Step 4: Create an internal repository map

 

4.1: HDP repository – copy the contents below, modify base_url to point to the hostname/IP address of your internal repository server, and save it as repo.json.

{
  "Repositories" : {
    "base_url" : "http://<ip-address-of-repo-server>/hdp/centos6/HDP-2.5.3.0",
    "verify_base_url" : true
  }
}

 

4.2: HDP-UTILS repository – copy the contents below, modify base_url to point to the hostname/IP address of your internal repository server, and save it as hdputils-repo.json.

 

{
  "Repositories" : {
    "base_url" : "http://<ip-address-of-repo-server>/hdp/centos6/HDP-UTILS-1.1.0.21",
    "verify_base_url" : true
  }
}

 

Step 5: Register the blueprint with the Ambari server by executing the command below. Note that the blueprint name in the URL must match the "blueprint" value referenced in hostmap.json (hdptest in this example).

curl -H "X-Requested-By: ambari" -X POST -u admin:admin http://<ambari-server-hostname>:8080/api/v1/blueprints/hdptest -d @cluster_config.json

Step 6: Set up the internal repositories via the REST API.

Execute the curl calls below to register the internal repositories. The stack version and repository names in the URLs must match the stack defined in the blueprint (HDP 2.5 in this example).

curl -H "X-Requested-By: ambari" -X PUT -u admin:admin http://<ambari-server-hostname>:8080/api/v1/stacks/HDP/versions/2.5/operating_systems/redhat6/repositories/HDP-2.5 -d @repo.json

curl -H "X-Requested-By: ambari" -X PUT -u admin:admin http://<ambari-server-hostname>:8080/api/v1/stacks/HDP/versions/2.5/operating_systems/redhat6/repositories/HDP-UTILS-1.1.0.21 -d @hdputils-repo.json
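
Optionally, you can read the repository definition back to confirm that the base URL was stored correctly (a quick sanity check, not a required step):

curl -H "X-Requested-By: ambari" -X GET -u admin:admin http://<ambari-server-hostname>:8080/api/v1/stacks/HDP/versions/2.5/operating_systems/redhat6/repositories/HDP-2.5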

Step 7: Pull the trigger! The command below will start the cluster installation.

curl -H "X-Requested-By: ambari" -X POST -u admin:admin http://<ambari-server-hostname>:8080/api/v1/clusters/multinode-hdp -d @hostmap.json
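
The call above returns immediately and Ambari carries out the installation in the background. As in Part 1, you can track the progress from the Ambari UI or with the requests API, for example:

curl -H "X-Requested-By: ambari" -X GET -u admin:admin http://<ambari-server-hostname>:8080/api/v1/clusters/multinode-hdp/requests/

curl -H "X-Requested-By: ambari" -X GET -u admin:admin http://<ambari-server-hostname>:8080/api/v1/clusters/multinode-hdp/requests/<request-number>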

Please feel free to comment if you need any further help on this. Happy Hadooping!!  :)

 

 

 

 


Automate HDP installation using Ambari Blueprints – Part 5


 

How to deploy an HDP cluster with Kerberos authentication using Ambari Blueprints?

 

You are at the right place! :) Please follow my article on HCC below to set up a multi-node HDP cluster using Ambari Blueprints with Kerberos authentication (MIT KDC).

https://community.hortonworks.com/articles/78969/automate-hdp-installation-using-ambari-blueprints-4.html

 

 

Please refer to the next part for automated HDP installation using Ambari Blueprints with ResourceManager high availability.

 

Please feel free to comment if you need any further help on this. Happy Hadooping!! :)

 


Automate HDP installation using Ambari Blueprints – Part 4


How to deploy an HDP cluster with Kerberos authentication using Ambari Blueprints?

You are at the right place! :) Please follow my article on HCC below to set up a single-node HDP cluster using Ambari Blueprints with Kerberos authentication (MIT KDC).

https://community.hortonworks.com/articles/70189/automate-hdp-installation-using-ambari-blueprints-3.html

 

Please refer to the next part for automated HDP installation using Ambari Blueprints with Kerberos authentication on a multi-node cluster.

 

Please feel free to comment if you need any further help on this. Happy Hadooping!! :)


Automated Kerberos Installation and Configuration

Automated Kerberos Installation and Configuration – For this post, I have written a shell script that uses the Ambari APIs to configure Kerberos on HDP single-node or multi-node clusters. You just need to clone our GitHub repository, modify the property file according to your cluster environment, execute the setup script, and phew!! Within 5-10 minutes you should have your cluster completely secured by Kerberos! Cool, isn't it? :)

 

Detailed Steps (Demo on HDP Sandbox 2.4):

 

1. Clone our GitHub repository on your local machine or one of the nodes in your Hadoop cluster.

git clone https://github.com/crazyadmins/useful-scripts.git

Sample Output:

[root@sandbox ~]# git clone https://github.com/crazyadmins/useful-scripts.git
Initialized empty Git repository in /root/useful-scripts/.git/
remote: Counting objects: 29, done.
remote: Compressing objects: 100% (25/25), done.
remote: Total 29 (delta 4), reused 25 (delta 3), pack-reused 0
Unpacking objects: 100% (29/29), done.

 

2. Go to the useful-scripts/ambari directory

[root@sandbox ~]# cd useful-scripts/ambari/
[root@sandbox ambari]# ls -lrt
total 16
-rw-r--r-- 1 root root 5701 2016-04-23 20:33 setup_kerberos.sh
-rw-r--r-- 1 root root 748 2016-04-23 20:33 README
-rw-r--r-- 1 root root 366 2016-04-23 20:33 ambari.props
[root@sandbox ambari]#

 

3. Copy setup_kerberos.sh and ambari.props to the host where you want to set up the KDC server

 

4. Edit the ambari.props file according to your cluster environment

Sample output for my Sandbox

[root@sandbox ambari]# cat ambari.props
CLUSTER_NAME=Sandbox
AMBARI_ADMIN_USER=admin
AMBARI_ADMIN_PASSWORD=admin
AMBARI_HOST=sandbox.hortonworks.com
KDC_HOST=sandbox.hortonworks.com
REALM=HWX.COM
KERBEROS_CLIENTS=sandbox.hortonworks.com
##### Notes #####
#1. KERBEROS_CLIENTS - Comma separated list of Kerberos clients in case of multinode cluster
#2. Admin principal is admin/admin and password is hadoop
[root@sandbox ambari]#

 

5. Start the installation by simply executing setup_kerberos.sh

Notes:

1. Please run setup_kerberos.sh from the KDC_HOST only. You don't need to set up or configure the KDC; the script will do everything for you.

2. If you are running the script on the Sandbox, please turn OFF maintenance mode for HDFS and turn ON maintenance mode for the Zeppelin Notebook before executing the script.

sh setup_kerberos.sh
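
Once the script finishes, a quick way to confirm Kerberos is working is to obtain a ticket for the admin principal it creates (admin/admin with password hadoop, as noted in ambari.props) and list it:

kinit admin/admin@HWX.COM
klist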

 

Screenshots:

1. Before script execution

2. Script execution in progress

3. Script finished

4. Ambari UI shows Kerberos is enabled

 

 

Please comment if you have any feedback/questions/suggestions. Happy Hadooping!! :)

 


Automate HDP installation using Ambari Blueprints – Part 1

Blog post after a long time! :) Okay, in this post we will see how to automate HDP installation using Ambari Blueprints.

 

What are Ambari Blueprints?

Ambari Blueprints are a definition of your HDP cluster in JSON format. A blueprint contains information about all the hosts in your cluster, their components, the mapping of stack components to hosts or host groups, and other configuration. Using Blueprints, we can call the Ambari APIs to completely automate the HDP installation process. Interesting stuff, isn't it?
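
As a side note, if you already have a cluster built through the Ambari UI, you can export its blueprint with a single API call and use it as a reference (handy for seeing what a real blueprint looks like; replace the placeholders with your own values):

curl -H "X-Requested-By: ambari" -X GET -u admin:admin http://<ambari-hostname>:8080/api/v1/clusters/<cluster-name>?format=blueprint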

Let's get started with a single-node cluster installation. Below are the steps to set up a single-node HDP cluster with Ambari Blueprints.

 

Step 1: Install the Ambari server using the steps mentioned in the link below

http://docs.hortonworks.com/HDPDocuments/Ambari-2.1.2.1/bk_Installing_HDP_AMB/content/_download_the_ambari_repo_lnx6.html

 

Step 2: Register ambari-agent manually

Install the ambari-agent package on all nodes in the cluster, set the hostname property to the Ambari server host (FQDN) in /etc/ambari-agent/conf/ambari-agent.ini, and start the agent.
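
Once the agents are running, you can optionally confirm that they have registered with the Ambari server by listing the hosts Ambari knows about (a quick sanity check):

curl -H "X-Requested-By: ambari" -X GET -u admin:admin http://<ambari-hostname>:8080/api/v1/hosts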

 

Step 3: Configure blueprints

Please follow the steps below to create the Blueprint files.

3.1 Create the hostmapping.json file as shown below:

{
  "blueprint" : "single-node-hdp-cluster",
  "default_password" : "admin",
  "host_groups" :[
    {
      "name" : "host_group_1",
      "hosts" : [
        {
          "fqdn" : "<fqdn-of-single-node-cluster-machine>"
        }
      ]
    }
  ]
}

 

3.2 Create the cluster_configuration.json file; it contains the mapping of HDP components to the host group.

{
  "configurations" : [ ],
  "host_groups" : [
    {
      "name" : "host_group_1",
      "components" : [
        {
          "name" : "NAMENODE"
        },
        {
          "name" : "SECONDARY_NAMENODE"
        },
        {
          "name" : "DATANODE"
        },
        {
          "name" : "HDFS_CLIENT"
        },
        {
          "name" : "RESOURCEMANAGER"
        },
        {
          "name" : "NODEMANAGER"
        },
        {
          "name" : "YARN_CLIENT"
        },
        {
          "name" : "HISTORYSERVER"
        },
        {
          "name" : "APP_TIMELINE_SERVER"
        },
        {
          "name" : "MAPREDUCE2_CLIENT"
        },
        {
          "name" : "ZOOKEEPER_SERVER"
        },
        {
          "name" : "ZOOKEEPER_CLIENT"
        }
      ],
      "cardinality" : "1"
    }
  ],
  "Blueprints" : {
    "blueprint_name" : "single-node-hdp-cluster",
    "stack_name" : "HDP",
    "stack_version" : "2.3"
  }
}

 

Step 4: Register the blueprint with the Ambari server by executing the command below.

curl -H "X-Requested-By: ambari" -X POST -u admin:admin http://<ambari-hostname>:8080/api/v1/blueprints/<blueprint-name> -d @cluster_configuration.json

 

Step 5: Pull the trigger! The command below will start the cluster installation.

curl -H "X-Requested-By: ambari" -X POST -u admin:admin http://<ambari-host>:8080/api/v1/clusters/<new-cluster-name> -d @hostmapping.json

 

Step 6: We can track the installation status with the REST calls below, or check the same from the Ambari UI.

 

curl -H "X-Requested-By: ambari" -X GET -u admin:admin http://<ambari-hostname>:8080/api/v1/clusters/<new-cluster-name>/requests/

 

curl -H "X-Requested-By: ambari" -X GET -u admin:admin http://<ambari-hostname>:8080/api/v1/clusters/<new-cluster-name>/requests/<request-number>

 

Thank you for your time! In the next part, we will see the installation of a multi-node HDP cluster using Ambari Blueprints.
