Configure Kerberos Authentication in Hortonworks Hadoop HDP 2.2
This is a quick and short tutorial on installing and configuring Kerberos authentication in a Hortonworks Hadoop cluster (HDP 2.2).
Here is my setup environment:
Kerberos Server: kerberos.crazyadmins.com
Kerberos Client: myclient.crazyadmins.com
Test Hadoop Hortonworks 2.2 Cluster: myclient.crazyadmins.com
Prerequisites:
Please ensure that the Kerberos server and the client/Hadoop cluster have each other's entries in /etc/hosts and can ping each other.
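For example, /etc/hosts on both machines might contain entries like the below (10.200.100.212 is myclient's address in my setup; the KDC address is a placeholder, so substitute your own):

10.200.100.211   kerberos.crazyadmins.com   kerberos
10.200.100.212   myclient.crazyadmins.com   myclient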
Let’s get started!
Step 1: Install krb server packages on Kerberos Server
On kerberos.crazyadmins.com, execute the below command:
yum -y install krb5-server krb5-libs krb5-auth-dialog krb5-workstation
Step 2: Edit /etc/krb5.conf and change the default REALM
Edit “/etc/krb5.conf” on kerberos.crazyadmins.com
It should look like below:
[root@kerberos ~]# cat /etc/krb5.conf
[logging]
 default = FILE:/var/log/krb5libs.log
 kdc = FILE:/var/log/krb5kdc.log
 admin_server = FILE:/var/log/kadmind.log

[libdefaults]
 default_realm = crazyadmins.com
 dns_lookup_realm = false
 dns_lookup_kdc = false
 ticket_lifetime = 24h
 renew_lifetime = 7d
 forwardable = true

[realms]
 crazyadmins.com = {
  kdc = kerberos.crazyadmins.com
  admin_server = kerberos.crazyadmins.com
 }

[domain_realm]
 .kerberos.crazyadmins.com = crazyadmins.com
 kerberos.crazyadmins.com = crazyadmins.com
Note – crazyadmins.com is my default realm. (Kerberos realm names are conventionally written in uppercase; lowercase works here as long as you use it consistently everywhere.)
Step 3: Create Kerberos database
Run the below command to create the database on kerberos.crazyadmins.com:
/usr/sbin/kdb5_util create -s
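The -s flag creates a stash file so the KDC can start without asking for the master key. You will be prompted to choose a database master password; the prompts look roughly like the below (exact messages and the database path may vary by distribution):

Enter KDC database master key:
Re-enter KDC database master key to verify: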
Step 4: Start the Core Kerberos services
Execute the below commands on kerberos.crazyadmins.com:
/etc/rc.d/init.d/krb5kdc start
/etc/rc.d/init.d/kadmin start
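Optionally, if you want these services to come back up after a reboot (assuming a RHEL/CentOS 6-style init system, which the init scripts above suggest), enable them with chkconfig:

chkconfig krb5kdc on
chkconfig kadmin on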
Step 5: Install and configure Kerberos Client
Use the below command to install the Kerberos client on myclient.crazyadmins.com (the client machine):
yum install krb5-workstation
Note: Please copy the modified krb5.conf from Step 2 to myclient.crazyadmins.com (the Kerberos client and Hadoop cluster), for example:
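From kerberos.crazyadmins.com (assuming root SSH access between the hosts):

scp /etc/krb5.conf myclient.crazyadmins.com:/etc/krb5.conf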
Step 6: Create the principals using the automated method
6.1 Go to the Ambari server admin UI -> Admin -> Security -> Enable Security -> enter your realm instead of EXAMPLE.COM (here we have used crazyadmins.com).
6.2 Then click Next -> download the CSV file containing the list of nodes, principals, and keytabs.
6.3 Then go to the Ambari server host and execute the below command:
6.4 /var/lib/ambari-server/resources/scripts/keytabs.sh host-principal-keytab-list.csv > keytabs-generate.sh
6.5 Copy the generated keytabs-generate.sh to your Kerberos server (from myclient.crazyadmins.com to kerberos.crazyadmins.com):
scp keytabs-generate.sh kerberos.crazyadmins.com:~
6.6 Run keytabs-generate.sh with sudo. This creates a tar file for each node/host in your Hadoop cluster; each tar contains the keytabs that need to be on that host.
6.7 Copy each tar file to the right host and extract it to the root directory (it already contains the correct directory structure). See the sketch after the note below.
Note – Please ensure that your keytab files end up at the correct location on each Kerberos client, i.e., /etc/security/keytabs.
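A minimal sketch of 6.6 and 6.7, assuming the generated tar for this host is named keytab_myclient.crazyadmins.com.tar (the actual file names are produced by keytabs-generate.sh, so check its output):

# on kerberos.crazyadmins.com
sudo ./keytabs-generate.sh
scp keytab_myclient.crazyadmins.com.tar myclient.crazyadmins.com:/tmp/

# on myclient.crazyadmins.com
tar xvf /tmp/keytab_myclient.crazyadmins.com.tar -C /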
Step 7: Set permissions on your keytab files by running the below script.
Note – If you are using a multi-node cluster, you need to run this script on each host. Ignore any "file not found" errors for services that are not installed on a given host.
Create permissions.sh (or give your script any name you like) in your home directory, copy all of the below contents into it, and run it on all the Kerberos client machines.
#!/bin/bash
chown root:hadoop /etc/security/keytabs
chmod 750 /etc/security/keytabs
chown ambari:ambari /etc/security/keytabs/ambari.keytab
chmod 400 /etc/security/keytabs/ambari.keytab
chown hdfs:hadoop /etc/security/keytabs/nn.service.keytab
chmod 400 /etc/security/keytabs/nn.service.keytab
chown root:hadoop /etc/security/keytabs/spnego.service.keytab
chmod 440 /etc/security/keytabs/spnego.service.keytab
chown ambari-qa:hadoop /etc/security/keytabs/smokeuser.headless.keytab
chmod 440 /etc/security/keytabs/smokeuser.headless.keytab
chown hdfs:hadoop /etc/security/keytabs/hdfs.headless.keytab
chmod 440 /etc/security/keytabs/hdfs.headless.keytab
chown hbase:hadoop /etc/security/keytabs/hbase.headless.keytab
chmod 440 /etc/security/keytabs/hbase.headless.keytab
chown hdfs:hadoop /etc/security/keytabs/dn.service.keytab
chmod 400 /etc/security/keytabs/dn.service.keytab
chown mapred:hadoop /etc/security/keytabs/jhs.service.keytab
chmod 400 /etc/security/keytabs/jhs.service.keytab
chown yarn:hadoop /etc/security/keytabs/rm.service.keytab
chmod 400 /etc/security/keytabs/rm.service.keytab
chown yarn:hadoop /etc/security/keytabs/nm.service.keytab
chmod 400 /etc/security/keytabs/nm.service.keytab
chown oozie:hadoop /etc/security/keytabs/oozie.service.keytab
chmod 400 /etc/security/keytabs/oozie.service.keytab
chown hive:hadoop /etc/security/keytabs/hive.service.keytab
chmod 400 /etc/security/keytabs/hive.service.keytab
chown hbase:hadoop /etc/security/keytabs/hbase.service.keytab
chmod 400 /etc/security/keytabs/hbase.service.keytab
chown zookeeper:hadoop /etc/security/keytabs/zk.service.keytab
chmod 400 /etc/security/keytabs/zk.service.keytab
chown nagios:nagios /etc/security/keytabs/nagios.service.keytab
chmod 400 /etc/security/keytabs/nagios.service.keytab
chown hdfs:hadoop /etc/security/keytabs/jn.service.keytab
chmod 400 /etc/security/keytabs/jn.service.keytab
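Then make the script executable and run it as root:

chmod +x ~/permissions.sh
sudo ~/permissions.sh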
Step 8: Verify that the correct keytab files and principals are associated with the correct service using the klist command. For example, on the NameNode:
klist -k -t /etc/security/keytabs/nn.service.keytab
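The output should look roughly like the below (the KVNO and timestamps will differ in your environment):

Keytab name: FILE:/etc/security/keytabs/nn.service.keytab
KVNO Timestamp         Principal
---- ----------------- ------------------------------------------------------
   1 04/30/15 21:15:03 nn/myclient.crazyadmins.com@crazyadmins.com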
Step 9: Click Apply in the Ambari UI to apply the security settings.
Step 10: If ZooKeeper does not start, check out http://spryinc.com/blog/configuring-kerberos-security-hortonworks-data-platform-20 (the "Hadoop / Ambari configuration, part 2" section).
Step 11: Once your services are started, try running a Hadoop command as a user who does not yet have a Kerberos principal (kuldeepk in this example):
[kuldeepk@myclient ~]$ hadoop fs -ls /
ls: Failed on local exception: java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]; Host Details : local host is: "myclient.crazyadmins.com/10.200.100.212"; destination host is: "myclient.crazyadmins.com":8020;
You got an error, and yes, it's expected: the user does not have a valid TGT!
Step 12: Add a principal for the user and get a ticket-granting ticket (TGT)
Run the below commands on the Kerberos server and remember the password you set:
[root@kerberos ~]# kadmin.local
kadmin.local: addprinc kuldeepk@crazyadmins.com
WARNING: no policy specified for kuldeepk@crazyadmins.com; defaulting to no policy
Enter password for principal "kuldeepk@crazyadmins.com":
Re-enter password for principal "kuldeepk@crazyadmins.com":
Principal "kuldeepk@crazyadmins.com" created.
kadmin.local:
Step 13: Initiate a TGT and enjoy Hadooping!
On the Kerberos client, run the below command and enter the password to get a TGT:
[kuldeepk@myclient ~]$ kinit kuldeepk
Password for kuldeepk@crazyadmins.com:
Verify your ticket with the klist command:
[kuldeepk@myclient ~]$ klist
Ticket cache: FILE:/tmp/krb5cc_1003
Default principal: kuldeepk@crazyadmins.com

Valid starting     Expires            Service principal
04/30/15 22:11:15  05/01/15 22:11:14  krbtgt/crazyadmins.com@crazyadmins.com
        renew until 04/30/15 22:11:15
[kuldeepk@myclient ~]$
Please comment below if you have any questions! Your feedback is appreciated.