5.2 Configure the $HADOOP_HOME/etc/hadoop/mapred-site.xml file by adding the following to the existing file:
<property>
  <name>mapreduce.jobhistory.keytab</name>
  <value>/etc/security/keytab/hadoop.keytab</value>
</property>
<property>
  <name>mapreduce.jobhistory.principal</name>
  <value>hadoop/_HOST@HADOOP.COM</value>
</property>
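At runtime Hadoop replaces _HOST with each node's fully qualified hostname, so the same setting works on every node. As a quick sanity check (a minimal sketch, assuming the MIT Kerberos client tools are installed and the keytab sits at the path configured above), list the principals stored in the keytab and confirm an entry of the form hadoop/<fqdn>@HADOOP.COM exists for the JobHistory host:
# List all principals and key versions contained in the keytab
klist -kt /etc/security/keytab/hadoop.keytab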
5.2.1 Distribute the modified configuration files to all nodes
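A minimal distribution sketch, assuming password-free SSH between nodes and that node2 and node3 are placeholder hostnames for your own cluster:
# node2 and node3 are hypothetical names; replace them with your actual node hostnames
for host in node2 node3; do
  scp $HADOOP_HOME/etc/hadoop/mapred-site.xml $host:$HADOOP_HOME/etc/hadoop/
done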
5.3 Configure $HADOOP_HOME/etc/hadoop/container-executor.cfg, overwriting the default contents with the following:
#configured value of yarn.nodemanager.linux-container-executor.group
yarn.nodemanager.linux-container-executor.group=hadoop
#comma separated list of users who can not run applications
banned.users=root
#Prevent other super-users
min.user.id=500
#comma separated list of system users who CAN run applications
allowed.system.users=hadoop
Note: container-executor.cfg must not contain extra spaces (for example around the = signs) or blank lines; otherwise an error will be reported!
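To catch stray whitespace or blank lines before they cause a startup failure, a quick check (a sketch assuming GNU cat and grep) is:
# cat -A shows line ends as $, making trailing spaces and empty lines visible
cat -A $HADOOP_HOME/etc/hadoop/container-executor.cfg
# Print any line that contains whitespace or is completely empty
grep -nE '[[:space:]]|^$' $HADOOP_HOME/etc/hadoop/container-executor.cfg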
5.4 Configure YARN to use LinuxContainerExecutor (required on every node)
1) On every node, change the owner and permissions of container-executor: the owner must be root, the group hadoop, and the mode 6050. Its default location is $HADOOP_HOME/bin.
chown root:hadoop /data/hadoop-3.1.3/bin/container-executor
chmod 6050 /data/hadoop-3.1.3/bin/container-executor
2) On every node, change the owner and permissions of the container-executor.cfg file: the file and every one of its parent directories must be owned by root with group hadoop, and the file's mode must be 400. Its default location is $HADOOP_HOME/etc/hadoop. A verification sketch follows these commands.
chown root:hadoop /data/hadoop-3.1.3/etc/hadoop/container-executor.cfg
chown root:hadoop /data/hadoop-3.1.3/etc/hadoop
chown root:hadoop /data/hadoop-3.1.3/etc
chown root:hadoop /data/hadoop-3.1.3
chown root:hadoop /data
chmod 400 /data/hadoop-3.1.3/etc/hadoop/container-executor.cfg
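To confirm both steps took effect, a verification sketch (assuming the /data/hadoop-3.1.3 install path used above, GNU stat, and a Hadoop 3.x container-executor that accepts --checksetup):
# Expect "root hadoop 6050" for the binary and "root hadoop 400" for the config file
stat -c '%U %G %a %n' /data/hadoop-3.1.3/bin/container-executor
stat -c '%U %G %a %n' /data/hadoop-3.1.3/etc/hadoop/container-executor.cfg
# Validate the overall setup; a non-zero exit code means something is still misconfigured
/data/hadoop-3.1.3/bin/container-executor --checksetup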
5.5 Start YARN by running start-yarn.sh
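After YARN is up, a quick smoke test (a sketch, assuming the hadoop/_HOST principal for the local machine exists in the keytab) is to obtain a ticket and list the registered NodeManagers:
# Obtain a Kerberos ticket for this host's hadoop service principal
kinit -kt /etc/security/keytab/hadoop.keytab hadoop/$(hostname -f)@HADOOP.COM
# Every NodeManager should show up with state RUNNING
yarn node -list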
6. Configure Kerberos authentication for HBase
6.1 Configure the $HBASE_HOME/conf/hbase-site.xml file by adding the following to the original file:
<property>
  <name>hbase.security.authentication</name>
  <value>kerberos</value>
</property>