Connecting to a Kerberos-authenticated Kafka from Python
Planning
ip | hostname | role |
---|---|---|
192.168.1.61 | master | Kerberos Server, Kafka |
192.168.1.62 | node1 | Kerberos Client |
192.168.1.63 | node2 | Kerberos Client |
Install the Kerberos client where needed; once installed you can use the kadmin command, which corresponds to the kadmin.local command on the Kerberos Server.
1. Installing Kafka
1.1 Download Kafka
Download a stable Kafka release from the Apache site (http://kafka.apache.org/downloads, or a domestic mirror such as https://mirror.bit.edu.cn/apache/kafka/2.8.2/). To match our ZooKeeper version we use kafka_2.12-2.5.0.tgz (Kafka is written in Scala and Java; 2.12 is the Scala version, 2.5.0 the Kafka version).
On the centos01 node, change into /opt/softwares/, download the archive there, then extract it to /opt/modules/:
$ cd /opt/softwares/
# $ wget https://mirror.bit.edu.cn/apache/kafka/2.8.2/kafka_2.12-2.8.2.tgz
$ wget https://archive.apache.org/dist/kafka/2.5.0/kafka_2.12-2.5.0.tgz
$ tar -zxvf kafka_2.12-2.5.0.tgz -C /opt/modules/
1.2 Edit the configuration files
Change to the installation directory, kafka_2.12-2.5.0:
cd /opt/modules/kafka_2.12-2.5.0
2. Installing Kerberos
The three machines are:
192.168.1.61 master KDC Server kafka1.abc.com
192.168.1.62 node1 Client kafka2.abc.com
192.168.1.63 node2 Client kafka3.abc.com
2.1 Server installation
Install the server-side packages on the KDC Server:
yum -y install krb5-server krb5-libs krb5-workstation
Installation output:
[root@master1 modules]# yum -y install krb5-server krb5-libs krb5-workstation
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
epel/x86_64/metalink | 6.3 kB 00:00:00
* base: mirrors.aliyun.com
* epel: hkg.mirror.rackspace.com
* extras: mirrors.aliyun.com
* updates: mirrors.aliyun.com
base | 3.6 kB 00:00:00
docker-ce-stable | 3.5 kB 00:00:00
extras | 2.9 kB 00:00:00
hashicorp | 1.4 kB 00:00:00
kubernetes/signature | 844 B 00:00:00
kubernetes/signature | 1.4 kB 00:00:00 !!!
updates | 2.9 kB 00:00:00
Package krb5-server-1.15.1-54.el7_9.x86_64 already installed and latest version
Package krb5-libs-1.15.1-54.el7_9.x86_64 already installed and latest version
Resolving Dependencies
--> Running transaction check
---> Package krb5-workstation.x86_64 0:1.15.1-54.el7_9 will be installed
--> Finished Dependency Resolution
Dependencies Resolved
=================================================================================================================================
Package Arch Version Repository Size
=================================================================================================================================
Installing:
krb5-workstation x86_64 1.15.1-54.el7_9 updates 821 k
Transaction Summary
=================================================================================================================================
Install 1 Package
Total download size: 821 k
Installed size: 2.5 M
Downloading packages:
krb5-workstation-1.15.1-54.el7_9.x86_64.rpm | 821 kB 00:00:00
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : krb5-workstation-1.15.1-54.el7_9.x86_64 1/1
Verifying : krb5-workstation-1.15.1-54.el7_9.x86_64 1/1
Installed:
krb5-workstation.x86_64 0:1.15.1-54.el7_9
Complete!
This generates the Kerberos configuration files under /var/kerberos:
[root@master1 kerberos]# pwd
/var/kerberos
[root@master1 kerberos]# ls -l
total 0
drwxr-xr-x. 3 root root 18 Jun 28 23:31 krb5
drwxr-xr-x. 2 root root 146 Nov 13 17:43 krb5kdc
[root@master1 kerberos]# cd krb5kdc/
[root@master1 krb5kdc]# ls -l
total 24
-rw-------. 1 root root 18 Nov 13 17:38 kadm5.acl
-rw-------. 1 root root 447 Nov 13 17:42 kdc.conf
-rw-------. 1 root root 8192 Nov 13 17:55 principal
-rw-------. 1 root root 8192 Nov 13 17:40 principal.kadm5
-rw-------. 1 root root 0 Nov 13 17:40 principal.kadm5.lock
-rw-------. 1 root root 0 Nov 13 17:55 principal.ok
[root@master1 krb5kdc]#
2.2 Edit the configuration files
2.2.1. krb5.conf
Edit /etc/krb5.conf as needed, then copy krb5.conf to the other client machines (192.168.1.62 and 192.168.1.63):
[root@master1 etc]# cat /etc/krb5.conf
# Configuration snippets may be placed in this directory as well
includedir /etc/krb5.conf.d/
[logging]
default = FILE:/var/log/krb5libs.log
kdc = FILE:/var/log/krb5kdc.log
admin_server = FILE:/var/log/kadmind.log
[libdefaults]
dns_lookup_realm = false
ticket_lifetime = 24h
renew_lifetime = 7d
forwardable = true
rdns = false
pkinit_anchors = FILE:/etc/pki/tls/certs/ca-bundle.crt
default_realm = ABC.COM
default_ccache_name = KEYRING:persistent:%{uid}
[realms]
ABC.COM = {
kdc = 192.168.1.61
admin_server = 192.168.1.61
}
[domain_realm]
# .example.com = EXAMPLE.COM
# example.com = EXAMPLE.COM
[root@master1 etc]#
Parameter notes:
[logging]: log file locations
[libdefaults]: default settings for every connection
dns_lookup_realm: whether to look up the realm to use via DNS
ticket_lifetime: ticket lifetime, typically 24 hours
renew_lifetime: maximum period a ticket can be renewed for, typically one week; once a ticket expires, subsequent access to Kerberized services fails
forwardable: whether tickets may be forwarded (if a user already holds a TGT and logs in to another remote system, the KDC can issue a new TGT there without the user re-authenticating)
rdns: if true, perform a reverse DNS lookup in addition to the forward hostname lookup; this flag has no effect when dns_canonicalize_hostname is set to false. Defaults to true.
pkinit_anchors: location of the trusted anchor (root) certificates; ignored if X509_anchors is specified on the command line
default_realm: the default realm; must match the name of the realm being configured
default_ccache_name: name of the default credential cache. Defaults to DEFCCNAME.
[realms]: the realms in use
kdc: the host the KDC runs on
admin_server: the host the KDC database administration service runs on
[domain_realm]: maps domain names or hostnames to realms
See the official documentation for details: http://web.mit.edu/kerberos/krb5-latest/doc/admin/conf_files/krb5_conf.html
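As an aside, values like ticket_lifetime = 24h and renew_lifetime = 7d use krb5's duration shorthand. A minimal sketch of converting such values to seconds (this is an illustration only, not the real libkrb5 parser, which accepts more formats such as h:m:s):

```python
def krb5_duration_to_seconds(value: str) -> int:
    """Convert a simple krb5-style duration such as '24h' or '7d' to seconds.

    Only the single-unit forms used in this article are handled.
    """
    units = {"s": 1, "m": 60, "h": 3600, "d": 86400}
    value = value.strip()
    if value[-1] in units:
        return int(value[:-1]) * units[value[-1]]
    return int(value)  # a bare number is already seconds

print(krb5_duration_to_seconds("24h"))  # 86400
print(krb5_duration_to_seconds("7d"))   # 604800
```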
2.2.2 kadm5.acl
Edit /var/kerberos/krb5kdc/kadm5.acl as needed:
[root@master1 etc]# cat /var/kerberos/krb5kdc/kadm5.acl
*/admin@ABC.COM *
Note: the realm in kadm5.acl must match the [realms] configuration in /etc/krb5.conf.
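The `*/admin@ABC.COM *` line grants all privileges to any principal whose second component is admin in realm ABC.COM. A quick way to check a principal against that pattern (a rough illustration only: kadmind actually matches per principal component, but for a simple pattern like this one fnmatch behaves the same):

```python
from fnmatch import fnmatchcase

def matches_acl(principal: str, pattern: str = "*/admin@ABC.COM") -> bool:
    # Approximation of kadmind's wildcard matching for simple patterns.
    return fnmatchcase(principal, pattern)

print(matches_acl("root/admin@ABC.COM"))            # True
print(matches_acl("kafka/kafka1.abc.com@ABC.COM"))  # False
```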
2.2.3 Initialize the KDC database
Run:
kdb5_util create -s -r ABC.COM
-s: generate a stash file and store the master server key (krb5kdc) in it
-r: specify the realm name
The Kerberos database directory is /var/kerberos/krb5kdc; to rebuild the database, delete that directory first.
2.3 Starting and stopping the Kerberos services
Start:
systemctl start krb5kdc
systemctl start kadmin
Stop:
systemctl stop krb5kdc
systemctl stop kadmin
Check status:
systemctl status krb5kdc
systemctl status kadmin
Status output:
[root@master1 krb5kdc]# systemctl status krb5kdc
● krb5kdc.service - Kerberos 5 KDC
Loaded: loaded (/usr/lib/systemd/system/krb5kdc.service; disabled; vendor preset: disabled)
Active: active (running) since Sun 2022-11-13 17:45:14 CST; 6h ago
Process: 23120 ExecStart=/usr/sbin/krb5kdc -P /var/run/krb5kdc.pid $KRB5KDC_ARGS (code=exited, status=0/SUCCESS)
Main PID: 23121 (krb5kdc)
Tasks: 1
Memory: 2.9M
CGroup: /system.slice/krb5kdc.service
└─23121 /usr/sbin/krb5kdc -P /var/run/krb5kdc.pid
Nov 13 17:45:14 master1 systemd[1]: Starting Kerberos 5 KDC...
Nov 13 17:45:14 master1 systemd[1]: PID file /var/run/krb5kdc.pid not readable (yet?) after start.
Nov 13 17:45:14 master1 systemd[1]: Started Kerberos 5 KDC.
[root@master1 krb5kdc]# systemctl status kadmin
● kadmin.service - Kerberos 5 Password-changing and Administration
Loaded: loaded (/usr/lib/systemd/system/kadmin.service; disabled; vendor preset: disabled)
Active: active (running) since Sun 2022-11-13 17:45:44 CST; 6h ago
Process: 23660 ExecStart=/usr/sbin/_kadmind -P /var/run/kadmind.pid $KADMIND_ARGS (code=exited, status=0/SUCCESS)
Main PID: 23661 (kadmind)
Tasks: 1
Memory: 11.0M
CGroup: /system.slice/kadmin.service
└─23661 /usr/sbin/kadmind -P /var/run/kadmind.pid
Nov 13 17:45:44 master1 systemd[1]: Starting Kerberos 5 Password-changing and Administration...
Nov 13 17:45:44 master1 systemd[1]: Started Kerberos 5 Password-changing and Administration.
[root@master1 krb5kdc]#
2.4 kadmin.local
On the Kerberos server you can use kadmin.local to perform administrative operations. Enter kadmin.local:
[root@master1 krb5kdc]# kadmin.local
Authenticating as principal root/admin@ABC.COM with password.
kadmin.local:
First create a principal so that Kerberos clients can log in to kadmin:
[root@master1 krb5kdc]# kadmin.local
Authenticating as principal root/admin@ABC.COM with password.
kadmin.local:
kadmin.local: add_principal root/admin@ABC.COM
WARNING: no policy specified for root/admin@ABC.COM; defaulting to no policy
Enter password for principal "root/admin@ABC.COM":
Re-enter password for principal "root/admin@ABC.COM":
add_principal: Principal or policy already exists while creating "root/admin@ABC.COM".
kadmin.local:
Enter the password (123456) twice to create the principal. (In the session above it already existed.)
2.5 Client installation
Run on each client node (192.168.1.62 and 192.168.1.63):
yum -y install krb5-workstation
2.5.1 Configure krb5.conf
Copy /etc/krb5.conf from 192.168.1.61 over the local /etc/krb5.conf:
[root@master1 krb5kdc]# scp -r /etc/krb5.conf root@192.168.1.62:/etc/krb5.conf
[root@master1 krb5kdc]# scp -r /etc/krb5.conf root@192.168.1.63:/etc/krb5.conf
Output:
[root@master1 krb5kdc]# scp -r /etc/krb5.conf root@192.168.1.62:/etc/krb5.conf
root@192.168.1.62's password:
krb5.conf 100% 616 1.1MB/s 00:00
[root@master1 krb5kdc]# scp -r /etc/krb5.conf root@192.168.1.63:/etc/krb5.conf
The authenticity of host '192.168.1.63 (192.168.1.63)' can't be established.
ECDSA key fingerprint is SHA256:Bd2Oaw0jieAGg7L6mr4MBYho9xoYczmg1OrsDgIA4rs.
ECDSA key fingerprint is MD5:3e:a4:55:7e:d7:d5:91:21:c8:41:bc:bb:2c:77:d1:cf.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.1.63' (ECDSA) to the list of known hosts.
root@192.168.1.63's password:
krb5.conf 100% 616 1.4MB/s 00:00
[root@master1 krb5kdc]#
2.5.2 kadmin
On Kerberos client machines you can use kadmin to perform administrative operations. The login principal must first be created on the Kerberos server; it defaults to {current user}/admin@realm.
[root@node1 ~]# kadmin
Authenticating as principal root/admin@ABC.COM with password.
Password for root/admin@ABC.COM:
kadmin:
Enter the password (123456) to log in. kadmin is used the same way as kadmin.local.
2.5.3 kinit (authenticate a user on the client)
[root@node1 ~]# kinit root/admin@ABC.COM
Password for root/admin@ABC.COM: (enter 123456)
[root@node1 ~]#
Authentication succeeds.
2.5.4 klist (show the current authenticated user)
[root@node1 ~]# klist
Ticket cache: KEYRING:persistent:0:0
Default principal: root/admin@ABC.COM
Valid starting Expires Service principal
11/14/2022 00:52:30 11/15/2022 00:52:30 krbtgt/ABC.COM@ABC.COM
[root@node1 ~]#
2.5.5 kdestroy (clear the current credential cache)
[root@node1 ~]# kdestroy
2.6 Generating keytab files
On the machine where Kerberos is installed (192.168.1.61), enter kadmin (kadmin.local on the Kerberos server; kadmin on machines with the Kerberos client installed) and run the following to create the server- and client-side principals.
Create the Kafka principals with random keys:
# addprinc -randkey kafka/hrxjb1.tcloudata.com@TCLOUDATA.COM
# kadmin.local -q 'addprinc -randkey kafka/kafka.abc.com@ABC.COM'
addprinc -randkey kafka/kafka1.abc.com@ABC.COM
addprinc -randkey kafka/kafka2.abc.com@ABC.COM
addprinc -randkey kafka/kafka3.abc.com@ABC.COM
Output:
[root@master1 ~]# kadmin.local
Authenticating as principal root/admin@ABC.COM with password.
kadmin.local: addprinc -randkey kafka/kafka1.abc.com@ABC.COM
WARNING: no policy specified for kafka/kafka1.abc.com@ABC.COM; defaulting to no policy
Principal "kafka/kafka1.abc.com@ABC.COM" created.
kadmin.local: addprinc -randkey kafka/kafka2.abc.com@ABC.COM
WARNING: no policy specified for kafka/kafka2.abc.com@ABC.COM; defaulting to no policy
Principal "kafka/kafka2.abc.com@ABC.COM" created.
kadmin.local: addprinc -randkey kafka/kafka3.abc.com@ABC.COM
WARNING: no policy specified for kafka/kafka3.abc.com@ABC.COM; defaulting to no policy
Principal "kafka/kafka3.abc.com@ABC.COM" created.
kadmin.local:
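All of these names follow Kerberos's service/hostname@REALM convention. A tiny helper to assemble and split such principals from Python (hypothetical, for illustration only; not part of any Kerberos library):

```python
import re

def make_principal(service: str, host: str, realm: str) -> str:
    """Assemble a service principal in the service/hostname@REALM form."""
    return f"{service}/{host}@{realm}"

def parse_principal(principal: str):
    """Split service/hostname@REALM back into its three parts."""
    m = re.fullmatch(r"([^/@]+)/([^@]+)@(.+)", principal)
    if not m:
        raise ValueError(f"not a service principal: {principal}")
    return m.groups()

print(make_principal("kafka", "kafka1.abc.com", "ABC.COM"))
# kafka/kafka1.abc.com@ABC.COM
print(parse_principal("kafka/kafka1.abc.com@ABC.COM"))
# ('kafka', 'kafka1.abc.com', 'ABC.COM')
```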
Export the keys to keytab files; the target directory must be created first:
[root@master1 ~]# mkdir -p /etc/security/keytabs/
Then export the keys:
#xst -k /tmp/kafka.keytab -norandkey kafka/kafka1.abc.com@ABC.COM
# ktadd -k /etc/security/keytabs/kafka1.keytab kafka/kafka1.abc.com@ABC.COM
xst -k /etc/security/keytabs/kafka1.keytab -norandkey kafka/kafka1.abc.com@ABC.COM
xst -k /etc/security/keytabs/kafka2.keytab -norandkey kafka/kafka2.abc.com@ABC.COM
xst -k /etc/security/keytabs/kafka3.keytab -norandkey kafka/kafka3.abc.com@ABC.COM
Output:
kadmin.local: ktadd -k /etc/security/keytabs/kafka1.keytab kafka/kafka1.abc.com@ABC.COM
kadmin.local: Key table file '/etc/security/keytabs/kafka1.keytab' not found while adding key to keytab
kadmin.local: xst -k /etc/security/keytabs/kafka1.keytab -norandkey kafka/kafka1.abc.com@ABC.COM
kadmin.local: Key table file '/etc/security/keytabs/kafka1.keytab' not found while adding key to keytab
kadmin.local: xst -k /etc/security/keytabs/kafka1.keytab -norandkey kafka/kafka1.abc.com@ABC.COM
Entry for principal kafka/kafka1.abc.com@ABC.COM with kvno 2, encryption type aes256-cts-hmac-sha1-96 added to keytab WRFILE:/etc/security/keytabs/kafka1.keytab.
Entry for principal kafka/kafka1.abc.com@ABC.COM with kvno 2, encryption type aes128-cts-hmac-sha1-96 added to keytab WRFILE:/etc/security/keytabs/kafka1.keytab.
Entry for principal kafka/kafka1.abc.com@ABC.COM with kvno 2, encryption type des3-cbc-sha1 added to keytab WRFILE:/etc/security/keytabs/kafka1.keytab.
Entry for principal kafka/kafka1.abc.com@ABC.COM with kvno 2, encryption type arcfour-hmac added to keytab WRFILE:/etc/security/keytabs/kafka1.keytab.
Entry for principal kafka/kafka1.abc.com@ABC.COM with kvno 2, encryption type camellia256-cts-cmac added to keytab WRFILE:/etc/security/keytabs/kafka1.keytab.
Entry for principal kafka/kafka1.abc.com@ABC.COM with kvno 2, encryption type camellia128-cts-cmac added to keytab WRFILE:/etc/security/keytabs/kafka1.keytab.
Entry for principal kafka/kafka1.abc.com@ABC.COM with kvno 2, encryption type des-hmac-sha1 added to keytab WRFILE:/etc/security/keytabs/kafka1.keytab.
Entry for principal kafka/kafka1.abc.com@ABC.COM with kvno 2, encryption type des-cbc-md5 added to keytab WRFILE:/etc/security/keytabs/kafka1.keytab.
kadmin.local:
With addprinc, if neither -randkey nor -nokey is specified, you must set a password.
With the xst export command, omitting -norandkey causes the key (password) to be randomly reset.
Use klist to inspect the exported entries:
klist -ket /etc/security/keytabs/kafka1.keytab
klist -ket /etc/security/keytabs/kafka2.keytab
klist -ket /etc/security/keytabs/kafka3.keytab
Output:
[root@master1 keytabs]# klist -ket /etc/security/keytabs/kafka1.keytab
Keytab name: FILE:/etc/security/keytabs/kafka1.keytab
KVNO Timestamp Principal
---- ------------------- ------------------------------------------------------
2 11/14/2022 02:30:47 kafka/kafka1.abc.com@ABC.COM (aes256-cts-hmac-sha1-96)
2 11/14/2022 02:30:47 kafka/kafka1.abc.com@ABC.COM (aes128-cts-hmac-sha1-96)
2 11/14/2022 02:30:47 kafka/kafka1.abc.com@ABC.COM (des3-cbc-sha1)
2 11/14/2022 02:30:47 kafka/kafka1.abc.com@ABC.COM (arcfour-hmac)
2 11/14/2022 02:30:47 kafka/kafka1.abc.com@ABC.COM (camellia256-cts-cmac)
2 11/14/2022 02:30:47 kafka/kafka1.abc.com@ABC.COM (camellia128-cts-cmac)
2 11/14/2022 02:30:47 kafka/kafka1.abc.com@ABC.COM (des-hmac-sha1)
2 11/14/2022 02:30:47 kafka/kafka1.abc.com@ABC.COM (des-cbc-md5)
[root@master1 keytabs]#
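Each entry line in the klist -ket listing has the form KVNO, timestamp, principal, (enctype). If you ever need to audit keytabs from a script, a sketch that extracts those fields (assuming the CentOS 7 klist output format shown above):

```python
import re

# One keytab entry line: "  KVNO  DATE TIME  PRINCIPAL (ENCTYPE)"
LINE = re.compile(r"\s*(\d+)\s+(\S+\s+\S+)\s+(\S+)\s+\((\S+)\)")

def parse_keytab_entry(line: str):
    """Return (kvno, timestamp, principal, enctype), or None for non-entry lines."""
    m = LINE.match(line)
    return (int(m.group(1)), m.group(2), m.group(3), m.group(4)) if m else None

entry = parse_keytab_entry(
    "   2 11/14/2022 02:30:47 kafka/kafka1.abc.com@ABC.COM (aes256-cts-hmac-sha1-96)")
print(entry)
```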
Copy the keytab files to the corresponding directory on the clients; create the directory on the other two machines first (mkdir -p /etc/security/keytabs/):
cp -r /etc/security/keytabs/kafka1.keytab /etc/security/keytabs/kafka.keytab
scp -r /etc/security/keytabs/kafka2.keytab root@192.168.1.62:/etc/security/keytabs/kafka.keytab
scp -r /etc/security/keytabs/kafka3.keytab root@192.168.1.63:/etc/security/keytabs/kafka.keytab
3. Starting ZooKeeper
Set the client port to 2182:
vim /opt/modules/kafka_2.12-2.5.0/config/zookeeper.properties
clientPort=2182
Create the zookeeper.jaas file used by ZooKeeper. Each of the three machines needs its own zookeeper.jaas; make sure the principal corresponds to an entry in /etc/security/keytabs/kafka.keytab. Taking 192.168.1.61 as the example:
vim /opt/modules/kafka_2.12-2.5.0/config/zookeeper.jaas
Server {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    keyTab="/etc/security/keytabs/kafka.keytab"
    // check that each machine's /etc/security/keytabs/kafka.keytab contains this principal
    principal="zookeeper/kafka1.abc.com@ABC.COM"
    useTicketCache=false;
};
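Since each of the three machines needs its own zookeeper.jaas with its own principal, the file can be generated per host rather than edited by hand. A sketch assuming this article's keytab path and realm:

```python
# Template for the per-host ZooKeeper JAAS file used in this article.
JAAS_TEMPLATE = """Server {{
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    keyTab="{keytab}"
    principal="{principal}"
    useTicketCache=false;
}};
"""

def render_zookeeper_jaas(host: str, realm: str = "ABC.COM",
                          keytab: str = "/etc/security/keytabs/kafka.keytab") -> str:
    """Render zookeeper.jaas for one host (paths/realm as assumed in this article)."""
    return JAAS_TEMPLATE.format(keytab=keytab,
                                principal=f"zookeeper/{host}@{realm}")

print(render_zookeeper_jaas("kafka1.abc.com"))
```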
Add the following to the ZooKeeper configuration file:
vim /opt/modules/kafka_2.12-2.5.0/config/zookeeper.properties
authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
requireClientAuthScheme=sasl
jaasLoginRenew=3600000
Modify the ZooKeeper startup script: add the following line to /opt/modules/kafka_2.12-2.5.0/bin/zookeeper-server-start.sh:
export KAFKA_OPTS="-Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.auth.login.config=/opt/modules/kafka_2.12-2.5.0/config/zookeeper.jaas"
4. Configuring Kafka
4.1 Create Kafka's kafka.jaas:
vim /opt/modules/kafka_2.12-2.5.0/config/kafka.jaas
KafkaServer {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    serviceName="kafka"
    keyTab="/etc/security/keytabs/kafka.keytab"
    principal="kafka/kafka1.abc.com@ABC.COM";
};
KafkaClient {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    serviceName="kafka"
    keyTab="/etc/security/keytabs/kafka.keytab"
    principal="kafka/kafka1.abc.com@ABC.COM"
    useTicketCache=true;
};
Client {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    serviceName="kafka"
    keyTab="/etc/security/keytabs/kafka.keytab"
    principal="kafka/kafka1.abc.com@ABC.COM"
    useTicketCache=true;
};
4.2 Configure Kafka's server.properties
Add the following to server.properties:
vim /opt/modules/kafka_2.12-2.5.0/config/server.properties
Example:
zookeeper.connect=hrxjb1.tcloudata.com:2182,hrxjb2.tcloudata.com:2182,hrxjb3.tcloudata.com:2182
listeners=SASL_PLAINTEXT://192.168.1.96:9092
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.mechanism.inter.broker.protocol=GSSAPI
sasl.enabled.mechanisms=GSSAPI
sasl.kerberos.service.name=kafka
Actual configuration used here:
zookeeper.connect=master1:2182
listeners=SASL_PLAINTEXT://192.168.1.61:9092
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.mechanism.inter.broker.protocol=GSSAPI
sasl.enabled.mechanisms=GSSAPI
sasl.kerberos.service.name=kafka
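Only zookeeper.connect and listeners differ between the example and the actual configuration; when rolling this out to several brokers, the SASL block can be generated per broker. A sketch using this article's fixed port and settings:

```python
def sasl_server_properties(broker_ip: str, zk_connect: str) -> str:
    """Render the SASL/Kerberos lines added to server.properties in this article.

    broker_ip and zk_connect vary per broker; the remaining lines are fixed.
    """
    return "\n".join([
        f"zookeeper.connect={zk_connect}",
        f"listeners=SASL_PLAINTEXT://{broker_ip}:9092",
        "authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer",
        "security.inter.broker.protocol=SASL_PLAINTEXT",
        "sasl.mechanism.inter.broker.protocol=GSSAPI",
        "sasl.enabled.mechanisms=GSSAPI",
        "sasl.kerberos.service.name=kafka",
    ])

print(sasl_server_properties("192.168.1.61", "master1:2182"))
```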
4.3 Modify the Kafka startup script
Add the following line to the Kafka startup script:
export KAFKA_OPTS="-Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.auth.login.config=/opt/modules/kafka_2.12-2.5.0/config/kafka.jaas"
Start ZooKeeper:
/opt/modules/kafka_2.12-2.5.0/bin/zookeeper-server-start.sh -daemon /opt/modules/kafka_2.12-2.5.0/config/zookeeper.properties
Start Kafka:
/opt/modules/kafka_2.12-2.5.0/bin/kafka-server-start.sh -daemon /opt/modules/kafka_2.12-2.5.0/config/server.properties
Commands executed:
[root@master1 config]# /opt/modules/kafka_2.12-2.5.0/bin/zookeeper-server-start.sh -daemon /opt/modules/kafka_2.12-2.5.0/config/zookeeper.properties
[root@master1 config]# /opt/modules/kafka_2.12-2.5.0/bin/kafka-server-start.sh -daemon /opt/modules/kafka_2.12-2.5.0/config/server.properties
[root@master1 config]#
Testing Kafka
Create a topic. First add the following line to each of
/opt/modules/kafka_2.12-2.5.0/bin/kafka-topics.sh
/opt/modules/kafka_2.12-2.5.0/bin/kafka-console-consumer.sh
/opt/modules/kafka_2.12-2.5.0/bin/kafka-console-producer.sh:
export KAFKA_OPTS="-Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.auth.login.config=/opt/modules/kafka_2.12-2.5.0/config/kafka.jaas"
/opt/modules/kafka_2.12-2.5.0/bin/kafka-topics.sh --create --zookeeper master1:2182 --replication-factor 1 --partitions 2 --topic testkrb
It fails with an error. The "Server not found in Kerberos database (7) - LOOKING_UP_SERVER" message below generally means the ZooKeeper client asked the KDC for a service ticket for a principal derived from the connect string (here zookeeper/master1@ABC.COM) that does not exist in the KDC database; creating a matching zookeeper/<hostname> principal, or connecting with the hostname used in the principal, typically resolves it:
[root@master1 config]# /opt/modules/kafka_2.12-2.5.0/bin/kafka-topics.sh --create --zookeeper master1:2182 --replication-factor 1 --partitions 2 --topic testkrb
[2022-11-15 00:11:06,210] ERROR An error: (java.security.PrivilegedActionException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Server not found in Kerberos database (7) - LOOKING_UP_SERVER)]) occurred when evaluating Zookeeper Quorum Member's received SASL token. Zookeeper Client will go to AUTH_FAILED state. (org.apache.zookeeper.client.ZooKeeperSaslClient)
[2022-11-15 00:11:06,210] ERROR SASL authentication with Zookeeper Quorum member failed: javax.security.sasl.SaslException: An error: (java.security.PrivilegedActionException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Server not found in Kerberos database (7) - LOOKING_UP_SERVER)]) occurred when evaluating Zookeeper Quorum Member's received SASL token. Zookeeper Client will go to AUTH_FAILED state. [Caused by java.security.PrivilegedActionException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Server not found in Kerberos database (7) - LOOKING_UP_SERVER)]] (org.apache.zookeeper.ClientCnxn)
[2022-11-15 00:11:06,211] ERROR [ZooKeeperClient] Auth failed. (kafka.zookeeper.ZooKeeperClient)
Error while executing topic command : KeeperErrorCode = AuthFailed for /brokers/ids
[2022-11-15 00:11:06,243] ERROR org.apache.zookeeper.KeeperException$AuthFailedException: KeeperErrorCode = AuthFailed for /brokers/ids
at org.apache.zookeeper.KeeperException.create(KeeperException.java:130)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:54)
at kafka.zookeeper.AsyncResponse.resultException(ZooKeeperClient.scala:557)
at kafka.zk.KafkaZkClient.getChildren(KafkaZkClient.scala:721)
at kafka.zk.KafkaZkClient.getSortedBrokerList(KafkaZkClient.scala:457)
at kafka.zk.KafkaZkClient.getAllBrokersInCluster(KafkaZkClient.scala:406)
at kafka.zk.AdminZkClient.getBrokerMetadatas(AdminZkClient.scala:68)
at kafka.zk.AdminZkClient.createTopic(AdminZkClient.scala:55)
at kafka.admin.TopicCommand$ZookeeperTopicService.createTopic(TopicCommand.scala:353)
at kafka.admin.TopicCommand$TopicService.createTopic(TopicCommand.scala:196)
at kafka.admin.TopicCommand$TopicService.createTopic$(TopicCommand.scala:191)
at kafka.admin.TopicCommand$ZookeeperTopicService.createTopic(TopicCommand.scala:345)
at kafka.admin.TopicCommand$.main(TopicCommand.scala:62)
at kafka.admin.TopicCommand.main(TopicCommand.scala)
(kafka.admin.TopicCommand$)
[root@master1 config]#
=============================================
Connecting Python to the authenticated Kafka
When using kafka-python to connect to a Kerberos-secured Kafka cluster, this Kerberos setup is not integrated with LDAP, so you cannot simply log in with a username and password. Notes below.
Reference: https://blog.csdn.net/u013153465/article/details/126017116
Articles found online mostly show code along these lines:
from kafka import KafkaProducer
import json

producer = KafkaProducer(bootstrap_servers=['master.cloud.com:9092'],
                         security_protocol='SASL_PLAINTEXT',
                         sasl_mechanism='GSSAPI',
                         sasl_kerberos_service_name='kafka',
                         value_serializer=lambda m: json.dumps(m).encode('utf-8'))
for i in range(50):
    producer.send('python_mfa_topic', {"序列x": i})
producer.flush()
producer:
import os
import json
from krbticket import KrbConfig, KrbCommand
from conf import config
from kafka import KafkaProducer

jaas_conf = os.path.join(config.project_server_file, 'keytab/kafka/kafka_jaas.conf')
krb5_conf = os.path.join(config.project_server_file, 'keytab/kafka/krb5.conf')
user_name = 'hdfs'
keytab_conf = os.path.join(config.project_server_file, f'keytab/kafka/{user_name}.keytab')
# Recommended: if several scripts share the default credential cache, one script's
# kinit/kdestroy can affect the others; use KRB5CCNAME to give each script its own cache file.
os.environ['KRB5CCNAME'] = os.path.join(config.project_server_file, f'keytab/kafka/krb5cc_{user_name}')
kconfig = KrbConfig(principal='hdfs/hdfs@HDFS.COM',
                    keytab=keytab_conf)
try:
    KrbCommand.kinit(kconfig)
    os.environ['KAFKA_OPTS'] = f'-Djava.security.auth.login.config={jaas_conf}' \
                               f' -Djava.security.krb5.conf={krb5_conf}'
    producer = KafkaProducer(bootstrap_servers=['master.cloud.com:9092', 'slave1.cloud.com:9092'],
                             security_protocol='SASL_PLAINTEXT',
                             sasl_mechanism='GSSAPI',
                             sasl_kerberos_service_name='kafka',
                             value_serializer=lambda m: json.dumps(m).encode('utf-8'))
    for i in range(50):
        producer.send('python_mfa_topic', {"序列x": i})
    producer.flush()
finally:
    # clear the ticket so it does not linger in the cache
    KrbCommand.kdestroy(kconfig)
    print("kdestroy done")
consumer:
import os
import json
from krbticket import KrbConfig, KrbCommand
from conf import config
from kafka import KafkaConsumer

jaas_conf = os.path.join(config.project_server_file, 'keytab/kafka/kafka_jaas.conf')
krb5_conf = os.path.join(config.project_server_file, 'keytab/kafka/krb5.conf')
user_name = 'hdfs'
keytab_conf = os.path.join(config.project_server_file, f'keytab/kafka/{user_name}.keytab')
# Recommended: if several scripts share the default credential cache, one script's
# kinit/kdestroy can affect the others; use KRB5CCNAME to give each script its own cache file.
os.environ['KRB5CCNAME'] = os.path.join(config.project_server_file, f'keytab/kafka/krb5cc_{user_name}')
kconfig = KrbConfig(principal='hdfs/hdfs@HDFS.COM',
                    keytab=keytab_conf)
try:
    KrbCommand.kinit(kconfig)
    os.environ['KAFKA_OPTS'] = f'-Djava.security.auth.login.config={jaas_conf}' \
                               f' -Djava.security.krb5.conf={krb5_conf}'
    consumer = KafkaConsumer('python_mfa_topic',
                             bootstrap_servers=['master.cloud.com:9092', 'slave1.cloud.com:9092'],
                             security_protocol='SASL_PLAINTEXT',
                             sasl_mechanism='GSSAPI',
                             auto_offset_reset='earliest',
                             group_id='python_mfa_group',
                             sasl_kerberos_service_name='kafka',
                             value_deserializer=lambda m: json.loads(m.decode('utf-8')))
    for message in consumer:
        print(message.value)
finally:
    # clear the ticket so it does not linger in the cache
    KrbCommand.kdestroy(kconfig)
    print("kdestroy done")
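The producer's value_serializer and the consumer's value_deserializer above are exact inverses; that can be checked locally, without any broker or Kerberos setup:

```python
import json

serialize = lambda m: json.dumps(m).encode('utf-8')    # matches the value_serializer above
deserialize = lambda b: json.loads(b.decode('utf-8'))  # matches the value_deserializer above

msg = {"序列x": 7}
round_tripped = deserialize(serialize(msg))
print(round_tripped == msg)  # True
```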
Free to reproduce for non-commercial use, no derivatives, keep attribution (Creative Commons 3.0 license).