3.3 Create a java.env configuration file under $ZOOKEEPER_HOME/conf/ with the following content:
export JVMFLAGS="-Djava.security.auth.login.config=$ZOOKEEPER_HOME/conf/jaas.conf"
3.4 Restart the ZooKeeper service.
3.5 Connect with the ZooKeeper client:
./zkCli.sh -server <hostname>:2181
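zkCli.sh sources the same java.env as the server, so the jaas.conf referenced above also needs a Client section for the client side of the SASL handshake. A minimal sketch; the keytab path and client principal below are placeholders, substitute the ones created earlier:

```
Client {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  storeKey=true
  useTicketCache=false
  keyTab="/etc/security/keytab/hadoop.keytab"
  principal="hadoop@HADOOP.COM";
};
```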
4. Configuring Kerberos authentication for HDFS
4.1 Edit the $HADOOP_HOME/etc/hadoop/core-site.xml file, adding the following to the existing content:
<property>
    <name>hadoop.security.authentication</name>
    <value>kerberos</value>
</property>
<property>
    <name>hadoop.security.authorization</name>
    <value>true</value>
</property>
<property>
    <name>hadoop.rpc.protection</name>
    <value>authentication</value>
</property>
<property>
    <name>hadoop.http.authentication.type</name>
    <value>kerberos</value>
</property>
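Before restarting, it can help to confirm the new keys actually made it into the file. A quick grep sketch, assuming HADOOP_HOME is set (the fallback path here is only an example); this only checks that the keys are present, not that their values are valid:

```shell
# Sanity check: report any of the four security keys missing from core-site.xml.
CONF="${HADOOP_HOME:-/opt/hadoop}/etc/hadoop/core-site.xml"
for key in hadoop.security.authentication hadoop.security.authorization \
           hadoop.rpc.protection hadoop.http.authentication.type; do
  grep -q "<name>$key</name>" "$CONF" 2>/dev/null || echo "missing: $key"
done
```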
4.2 Edit the $HADOOP_HOME/etc/hadoop/hdfs-site.xml file, adding the following to the existing content:
<property>
    <name>dfs.block.access.token.enable</name>
    <value>true</value>
</property>
<property>
    <name>dfs.namenode.keytab.file</name>
    <value>/etc/security/keytab/hadoop.keytab</value>
</property>
<property>
    <name>dfs.namenode.kerberos.principal</name>
    <value>hadoop/_HOST@HADOOP.COM</value>
</property>
<property>
    <name>dfs.web.authentication.kerberos.principal</name>
    <value>HTTP/_HOST@HADOOP.COM</value>
</property>
<property>
    <name>dfs.web.authentication.kerberos.keytab</name>
    <value>/etc/security/keytab/hadoop.keytab</value>
</property>
<property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
</property>
<property>
    <name>dfs.http.policy</name>
    <value>HTTPS_ONLY</value>
</property>
<property>
    <name>dfs.namenode.https-address</name>
    <value>0.0.0.0:50070</value>
</property>
<property>
    <name>dfs.permissions.supergroup</name>
    <value>hadoop</value>
    <description>The name of the group of super-users.</description>
</property>
<property>
    <name>dfs.datanode.keytab.file</name>
    <value>/etc/security/keytab/hadoop.keytab</value>
</property>
<property>
    <name>dfs.datanode.kerberos.principal</name>
    <value>hadoop/_HOST@HADOOP.COM</value>
</property>
<property>
    <name>dfs.datanode.data.dir.perm</name>
    <value>700</value>
</property>
<property>
    <name>dfs.datanode.address</name>
    <value>0.0.0.0:50010</value>
</property>
<property>
    <name>dfs.datanode.http.address</name>
    <value>0.0.0.0:50075</value>
</property>
<property>
    <name>dfs.data.transfer.protection</name>
    <value>integrity</value>
</property>
<property>
    <name>dfs.journalnode.keytab.file</name>
    <value>/etc/security/keytab/hadoop.keytab</value>
</property>
<property>
    <name>dfs.journalnode.kerberos.principal</name>
    <value>hadoop/_HOST@HADOOP.COM</value>
</property>
<property>
    <name>dfs.journalnode.kerberos.internal.spnego.principal</name>
    <value>${dfs.web.authentication.kerberos.principal}</value>
</property>
<property>
    <name>dfs.journalnode.http-address</name>
    <value>0.0.0.0:8480</value>
</property>
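A similar quick check works for hdfs-site.xml. A sketch, again assuming HADOOP_HOME is set; grep only confirms the keytab path and principals appear in the file, not that the keytab exists or the principals are registered with the KDC:

```shell
# Sanity check: confirm the keytab path and Kerberos principals appear in hdfs-site.xml.
CONF="${HADOOP_HOME:-/opt/hadoop}/etc/hadoop/hdfs-site.xml"
for needle in '/etc/security/keytab/hadoop.keytab' \
              'hadoop/_HOST@HADOOP.COM' 'HTTP/_HOST@HADOOP.COM'; do
  grep -q "$needle" "$CONF" 2>/dev/null || echo "missing: $needle"
done
```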
4.3 Installing the HTTPS service on the Hadoop cluster
Installation notes: the CA certificate files hdfs_ca_key and hdfs_ca_cert need to be generated on only one node. Every node, including the one that generated the certificate, must perform the operations from step 4 onward, and all of the following operations must be run as the root user.
1) Generate the CA certificate on node ha01. You will be prompted for a password twice. In the subject: C is the country code (CN here, for China); ST is the province; L is the city; O and OU are the company or personal domain; and the final CN is the hostname of the node generating the certificate (ha01).
openssl req -new -x509 -keyout hdfs_ca_key -out hdfs_ca_cert -days 9999 -subj /C=CN/ST=shanxi/L=xian/O=hlk/OU=hlk/CN=ha01
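After generation, the certificate can be inspected without the passphrase (the certificate itself is public). An optional sanity-check sketch, run in the directory holding hdfs_ca_cert:

```shell
# Inspect the generated CA certificate: the subject should show CN=ha01
# and the expiry should be roughly 9999 days out.
if [ -f hdfs_ca_cert ]; then
  openssl x509 -noout -subject -enddate -in hdfs_ca_cert
fi
```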
2) Distribute the hdfs_ca_key and hdfs_ca_cert files generated on ha01 to the /tmp directory of every node.