Configuring Kerberos Authentication for YARN
關(guān)于 Kerberos 的安裝和 HDFS 配置 kerberos 認(rèn)證,請(qǐng)參考?HDFS配置kerberos認(rèn)證。
1. 環(huán)境說(shuō)明
系統(tǒng)環(huán)境:
- 操作系統(tǒng):CentOs 6.6
- Hadoop版本:CDH5.4
- JDK版本:1.7.0_71
- 運(yùn)行用戶:root
集群各節(jié)點(diǎn)角色規(guī)劃為:
192.168.56.121 cdh1 NameNode、ResourceManager、HBase、Hive metastore、Impala Catalog、Impala statestore、Sentry 192.168.56.122 cdh2 DataNode、SecondaryNameNode、NodeManager、HBase、Hive Server2、Impala Server 192.168.56.123 cdh3 DataNode、HBase、NodeManager、Hive Server2、Impala Servercdh1作為master節(jié)點(diǎn),其他節(jié)點(diǎn)作為slave節(jié)點(diǎn),hostname 請(qǐng)使用小寫(xiě),要不然在集成 kerberos 時(shí)會(huì)出現(xiàn)一些錯(cuò)誤。
2. Generate the Keytabs

On cdh1, the KDC server node, run:

```shell
cd /var/kerberos/krb5kdc/

kadmin.local -q "addprinc -randkey yarn/cdh1@JAVACHEN.COM"
kadmin.local -q "addprinc -randkey yarn/cdh2@JAVACHEN.COM"
kadmin.local -q "addprinc -randkey yarn/cdh3@JAVACHEN.COM"
kadmin.local -q "addprinc -randkey mapred/cdh1@JAVACHEN.COM"
kadmin.local -q "addprinc -randkey mapred/cdh2@JAVACHEN.COM"
kadmin.local -q "addprinc -randkey mapred/cdh3@JAVACHEN.COM"

kadmin.local -q "xst -k yarn.keytab yarn/cdh1@JAVACHEN.COM"
kadmin.local -q "xst -k yarn.keytab yarn/cdh2@JAVACHEN.COM"
kadmin.local -q "xst -k yarn.keytab yarn/cdh3@JAVACHEN.COM"
kadmin.local -q "xst -k mapred.keytab mapred/cdh1@JAVACHEN.COM"
kadmin.local -q "xst -k mapred.keytab mapred/cdh2@JAVACHEN.COM"
kadmin.local -q "xst -k mapred.keytab mapred/cdh3@JAVACHEN.COM"
```

Copy yarn.keytab and mapred.keytab to /etc/hadoop/conf on the other nodes.
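The twelve near-identical kadmin.local invocations can also be generated with a small loop rather than typed by hand. This is only a sketch: it prints the commands (a dry run) so they can be reviewed before being executed on the KDC host; the host list and realm are taken from the cluster layout above.

```shell
# Dry run: print the kadmin.local commands for every service/host pair
# instead of executing them. Hosts and realm are assumptions from this
# article's cluster layout.
gen_kadmin_cmds() {
  local realm="JAVACHEN.COM"
  local hosts="cdh1 cdh2 cdh3"
  local svc host
  for svc in yarn mapred; do
    # create one principal per host for this service
    for host in $hosts; do
      echo "kadmin.local -q \"addprinc -randkey ${svc}/${host}@${realm}\""
    done
    # export all of this service's principals into one keytab
    for host in $hosts; do
      echo "kadmin.local -q \"xst -k ${svc}.keytab ${svc}/${host}@${realm}\""
    done
  done
}

gen_kadmin_cmds
```

Review the printed commands, then run them on the KDC host (e.g. `gen_kadmin_cmds | bash`); the scp step below then distributes the resulting keytab files.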
```shell
$ scp yarn.keytab mapred.keytab cdh1:/etc/hadoop/conf
$ scp yarn.keytab mapred.keytab cdh2:/etc/hadoop/conf
$ scp yarn.keytab mapred.keytab cdh3:/etc/hadoop/conf
```

Then set ownership and permissions on cdh1, cdh2, and cdh3:

```shell
$ ssh cdh1 "cd /etc/hadoop/conf/; chown yarn:hadoop yarn.keytab; chown mapred:hadoop mapred.keytab; chmod 400 *.keytab"
$ ssh cdh2 "cd /etc/hadoop/conf/; chown yarn:hadoop yarn.keytab; chown mapred:hadoop mapred.keytab; chmod 400 *.keytab"
$ ssh cdh3 "cd /etc/hadoop/conf/; chown yarn:hadoop yarn.keytab; chown mapred:hadoop mapred.keytab; chmod 400 *.keytab"
```

A keytab is effectively a permanent credential: it lets its holder authenticate without entering a password (it only becomes invalid if the principal's password is changed in the KDC). Anyone with read access to the file can therefore impersonate the principals it contains when accessing Hadoop, so each keytab must be readable only by its owner (mode 0400).
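That 0400 requirement is easy to get wrong when copying files between nodes. Below is a minimal sketch of a permission check, assuming GNU coreutils `stat` (standard on CentOS):

```shell
# Report whether a keytab is readable only by its owner (mode 0400).
check_keytab_perms() {
  local f="$1" mode
  mode=$(stat -c %a "$f" 2>/dev/null) || { echo "MISSING: $f"; return; }
  if [ "$mode" = "400" ]; then
    echo "OK: $f has mode $mode"
  else
    echo "WARN: $f has mode $mode, expected 400"
  fi
}

# Example: check both keytabs on the local node
for kt in /etc/hadoop/conf/yarn.keytab /etc/hadoop/conf/mapred.keytab; do
  check_keytab_perms "$kt"
done
```

Running this on each node (e.g. via ssh, as above) catches a forgotten chmod before it turns into an authentication failure at service startup.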
3. Modify the YARN Configuration Files

Add the following to yarn-site.xml:

```xml
<property>
  <name>yarn.resourcemanager.keytab</name>
  <value>/etc/hadoop/conf/yarn.keytab</value>
</property>
<property>
  <name>yarn.resourcemanager.principal</name>
  <value>yarn/_HOST@JAVACHEN.COM</value>
</property>
<property>
  <name>yarn.nodemanager.keytab</name>
  <value>/etc/hadoop/conf/yarn.keytab</value>
</property>
<property>
  <name>yarn.nodemanager.principal</name>
  <value>yarn/_HOST@JAVACHEN.COM</value>
</property>
<property>
  <name>yarn.nodemanager.container-executor.class</name>
  <value>org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor</value>
</property>
<property>
  <name>yarn.nodemanager.linux-container-executor.group</name>
  <value>yarn</value>
</property>
```

To enable SSL for the YARN web UIs, also add:
```xml
<property>
  <name>yarn.http.policy</name>
  <value>HTTPS_ONLY</value>
</property>
```

Add the following to mapred-site.xml:
```xml
<property>
  <name>mapreduce.jobhistory.keytab</name>
  <value>/etc/hadoop/conf/mapred.keytab</value>
</property>
<property>
  <name>mapreduce.jobhistory.principal</name>
  <value>mapred/_HOST@JAVACHEN.COM</value>
</property>
```

To enable SSL for the MapReduce JobHistory Server, also add:
```xml
<property>
  <name>mapreduce.jobhistory.http.policy</name>
  <value>HTTPS_ONLY</value>
</property>
```

Create a container-executor.cfg file under /etc/hadoop/conf with the following content:
```
# configured value of yarn.nodemanager.linux-container-executor.group
yarn.nodemanager.linux-container-executor.group=yarn
# comma-separated list of users who can not run applications
banned.users=bin
# prevent other super-users
min.user.id=0
# comma-separated list of system users who CAN run applications
allowed.system.users=root,nobody,impala,hive,hdfs,yarn
```

Set its ownership and permissions:
```shell
$ chown root:yarn container-executor.cfg
$ chmod 400 container-executor.cfg
$ ll container-executor.cfg
-r-------- 1 root yarn 354 11-05 14:14 container-executor.cfg
```

Note:
- container-executor.cfg must be owned by root:yarn with mode 400.
- yarn.nodemanager.linux-container-executor.group must be set in both yarn-site.xml and container-executor.cfg, and its value must be the group of the user that runs the NodeManager — here, yarn.
- banned.users must not be empty; its default value is hdfs,yarn,mapred,bin.
- min.user.id defaults to 1000; on some CentOS systems the minimum regular-user UID is 500, in which case the value needs to be lowered.
- Make sure the directories behind yarn.nodemanager.local-dirs and yarn.nodemanager.log-dirs have mode 755.
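The min.user.id pitfall above can be checked mechanically. A sketch, assuming the system's minimum UID is recorded as UID_MIN in /etc/login.defs (as on CentOS/RHEL); the cfg path is passed in, and an optional second argument overrides the detected UID_MIN:

```shell
# Warn when container-executor.cfg's min.user.id is higher than the
# system's minimum regular-user UID, which would block legitimate users.
check_min_user_id() {
  local cfg="$1"
  local sys_min="${2:-$(awk '/^UID_MIN/ {print $2}' /etc/login.defs 2>/dev/null)}"
  local cfg_min
  [ -r "$cfg" ] || { echo "MISSING: $cfg"; return; }
  cfg_min=$(awk -F= '/^min\.user\.id/ {print $2}' "$cfg")
  if [ -n "$sys_min" ] && [ -n "$cfg_min" ] && [ "$cfg_min" -gt "$sys_min" ]; then
    echo "WARN: min.user.id=$cfg_min is above system UID_MIN=$sys_min"
  else
    echo "OK: min.user.id=$cfg_min (system UID_MIN=$sys_min)"
  fi
}

check_min_user_id /etc/hadoop/conf/container-executor.cfg
```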
設(shè)置 /usr/lib/hadoop-yarn/bin/container-executor 讀寫(xiě)權(quán)限為?6050?如下:
$ chown root:yarn /usr/lib/hadoop-yarn/bin/container-executor $ chmod 6050 /usr/lib/hadoop-yarn/bin/container-executor$ ll /usr/lib/hadoop-yarn/bin/container-executor ---Sr-s--- 1 root yarn 333 11-04 19:11 container-executor測(cè)試是否配置正確:
$ /usr/lib/hadoop-yarn/bin/container-executor --checksetup如果提示錯(cuò)誤,則查看 NodeManger 的日志,然后對(duì)照?YARN ONLY: Container-executor Error Codes?查看錯(cuò)誤對(duì)應(yīng)的問(wèn)題說(shuō)明。
關(guān)于 LinuxContainerExecutor 的詳細(xì)說(shuō)明,可以參考?http://hadoop.apache.org/docs/r2.5.0/hadoop-project-dist/hadoop-common/SecureMode.html#LinuxContainerExecutor。
Remember to sync the modified files to the other nodes, cdh2 and cdh3, and double-check that the permissions are correct on every node:

```shell
$ cd /etc/hadoop/conf/
$ scp yarn-site.xml mapred-site.xml container-executor.cfg cdh2:/etc/hadoop/conf/
$ scp yarn-site.xml mapred-site.xml container-executor.cfg cdh3:/etc/hadoop/conf/
$ ssh cdh2 "cd /etc/hadoop/conf/; chown root:yarn container-executor.cfg; chmod 400 container-executor.cfg"
$ ssh cdh3 "cd /etc/hadoop/conf/; chown root:yarn container-executor.cfg; chmod 400 container-executor.cfg"
```

4. Starting the Services
啟動(dòng) ResourceManager
resourcemanager 是通過(guò) yarn 用戶啟動(dòng)的,故在 cdh1 上先獲取 yarn 用戶的 ticket 再啟動(dòng)服務(wù):
$ kinit -k -t /etc/hadoop/conf/yarn.keytab yarn/cdh1@JAVACHEN.COM $ service hadoop-yarn-resourcemanager start然后查看日志,確認(rèn)是否啟動(dòng)成功。
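That log check can be done from the shell with a quick grep. A sketch; the log path below is a guess based on typical CDH packaging defaults and should be adjusted to your installation:

```shell
# Scan a just-started daemon's log for Kerberos startup failures.
check_daemon_log() {
  local log="$1"
  if [ ! -r "$log" ]; then
    echo "MISSING: $log"
  elif grep -qE 'FATAL|Login failure' "$log"; then
    echo "FAILED: see $log"
  else
    echo "OK: no fatal errors in $log"
  fi
}

# Assumed log location for the ResourceManager on cdh1
check_daemon_log /var/log/hadoop-yarn/yarn-yarn-resourcemanager-cdh1.log
```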
啟動(dòng) NodeManager
resourcemanager 是通過(guò) yarn 用戶啟動(dòng)的,故在 cdh2 和 cdh3 上先獲取 yarn 用戶的 ticket 再啟動(dòng)服務(wù):
$ ssh cdh2 "kinit -k -t /etc/hadoop/conf/yarn.keytab yarn/cdh2@JAVACHEN.COM ;service hadoop-yarn-nodemanager start" $ ssh cdh3 "kinit -k -t /etc/hadoop/conf/yarn.keytab yarn/cdh3@JAVACHEN.COM ;service hadoop-yarn-nodemanager start"啟動(dòng) MapReduce Job History Server
resourcemanager 是通過(guò) mapred 用戶啟動(dòng)的,故在 cdh1 上先獲取 mapred 用戶的 ticket 再啟動(dòng)服務(wù):
$ kinit -k -t /etc/hadoop/conf/mapred.keytab mapred/cdh1@JAVACHEN.COM $ service hadoop-mapreduce-historyserver start5. 測(cè)試
Check that the web UI is reachable: http://cdh1:8088/cluster

Run a MapReduce example:
```shell
$ klist
Ticket cache: FILE:/tmp/krb5cc_1002
Default principal: yarn/cdh1@JAVACHEN.COM

Valid starting     Expires            Service principal
11/10/14 11:18:55  11/11/14 11:18:55  krbtgt/cdh1@JAVACHEN.COM
        renew until 11/17/14 11:18:55

Kerberos 4 ticket cache: /tmp/tkt1002
klist: You have no tickets cached

$ hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar pi 10 10000
```

If no error is reported, the configuration works. The final output looks like:

```
Job Finished in 54.56 seconds
Estimated value of Pi is 3.14120000000000000000
```

If you see the following error instead, check that the HADOOP_YARN_HOME environment variable is set correctly and agrees with yarn.application.classpath.
```
14/11/13 11:41:02 INFO mapreduce.Job: Job job_1415849491982_0003 failed with state FAILED due to: Application application_1415849491982_0003 failed 2 times due to AM Container for appattempt_1415849491982_0003_000002 exited with exitCode: 1 due to: Exception from container-launch.
Container id: container_1415849491982_0003_02_000001
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:538)
    at org.apache.hadoop.util.Shell.run(Shell.java:455)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:702)
    at org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.launchContainer(LinuxContainerExecutor.java:281)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:299)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
    at java.util.concurrent.FutureTask.run(FutureTask.java:138)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:662)

Shell output: main : command provided 1
main : user is yarn
main : requested yarn user is yarn

Container exited with a non-zero exit code 1
.Failing this attempt.. Failing the application.
14/11/13 11:41:02 INFO mapreduce.Job: Counters: 0
Job Finished in 13.428 seconds
java.io.FileNotFoundException: File does not exist: hdfs://cdh1:8020/user/yarn/QuasiMonteCarlo_1415850045475_708291630/out/reduce-out
```
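A rough way to pre-check the HADOOP_YARN_HOME mismatch behind this failure is sketched below. It is only a grep heuristic, not a real XML parse: if yarn.application.classpath in yarn-site.xml references the `$HADOOP_YARN_HOME` variable (rather than a hard-coded path), the two stay consistent automatically.

```shell
# Heuristic check that HADOOP_YARN_HOME is set and that yarn-site.xml's
# classpath references it by variable name.
check_yarn_home() {
  local site="$1" home="$2"
  if [ -z "$home" ]; then
    echo "WARN: HADOOP_YARN_HOME is not set"
  elif grep -q "HADOOP_YARN_HOME" "$site" 2>/dev/null; then
    echo "OK: yarn.application.classpath references \$HADOOP_YARN_HOME"
  else
    echo "WARN: check that yarn.application.classpath matches $home"
  fi
}

check_yarn_home /etc/hadoop/conf/yarn-site.xml "$HADOOP_YARN_HOME"
```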