k8s Cluster Setup - 05 - Deploying the Kubernetes Control Plane

V. Deploying the Kubernetes Control Plane

In this part we deploy the Kubernetes control plane, running multiple instances of each component for high availability. In this example we deploy the API Server, Scheduler, and Controller Manager on two nodes; you can also follow the same steps to build a three-node HA setup, as the procedure is identical.

All of the commands below are run on every master node. In our example that is node-1 and node-2, corresponding to the hosts hombd03 and hombd04.

1. Configure the API Server

Run this on node-1 first, then on node-2:

# Create the directories Kubernetes needs
$ mkdir -p /etc/kubernetes/ssl

# Move the certificate files into place
$ cd ~
$ mv ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem \
    service-account-key.pem service-account.pem \
    proxy-client.pem proxy-client-key.pem \
    /etc/kubernetes/ssl

# Configure kube-apiserver.service
# This node's internal IP
$ IP=192.168.1.123   # node-1's IP; on node-2 set that node's own address
# Number of apiserver instances
$ APISERVER_COUNT=2
# etcd cluster nodes
$ ETCD_ENDPOINTS=(192.168.1.123 192.168.1.124 192.168.1.125)
# Create the apiserver service unit
$ cat <<EOF > /etc/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-apiserver \\
  --advertise-address=${IP} \\
  --allow-privileged=true \\
  --apiserver-count=${APISERVER_COUNT} \\
  --audit-log-maxage=30 \\
  --audit-log-maxbackup=3 \\
  --audit-log-maxsize=100 \\
  --audit-log-path=/var/log/audit.log \\
  --authorization-mode=Node,RBAC \\
  --bind-address=0.0.0.0 \\
  --client-ca-file=/etc/kubernetes/ssl/ca.pem \\
  --enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \\
  --etcd-cafile=/etc/kubernetes/ssl/ca.pem \\
  --etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem \\
  --etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem \\
  --etcd-servers=https://${ETCD_ENDPOINTS[0]}:2379,https://${ETCD_ENDPOINTS[1]}:2379,https://${ETCD_ENDPOINTS[2]}:2379 \\
  --event-ttl=1h \\
  --kubelet-certificate-authority=/etc/kubernetes/ssl/ca.pem \\
  --kubelet-client-certificate=/etc/kubernetes/ssl/kubernetes.pem \\
  --kubelet-client-key=/etc/kubernetes/ssl/kubernetes-key.pem \\
  --service-account-issuer=api \\
  --service-account-key-file=/etc/kubernetes/ssl/service-account.pem \\
  --service-account-signing-key-file=/etc/kubernetes/ssl/service-account-key.pem \\
  --api-audiences=api,vault,factors \\
  --service-cluster-ip-range=192.233.0.0/16 \\
  --service-node-port-range=30000-32767 \\
  --proxy-client-cert-file=/etc/kubernetes/ssl/proxy-client.pem \\
  --proxy-client-key-file=/etc/kubernetes/ssl/proxy-client-key.pem \\
  --runtime-config=api/all=true \\
  --requestheader-client-ca-file=/etc/kubernetes/ssl/ca.pem \\
  --requestheader-allowed-names=aggregator \\
  --requestheader-extra-headers-prefix=X-Remote-Extra- \\
  --requestheader-group-headers=X-Remote-Group \\
  --requestheader-username-headers=X-Remote-User \\
  --tls-cert-file=/etc/kubernetes/ssl/kubernetes.pem \\
  --tls-private-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \\
  --v=1
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
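One detail worth knowing: the heredoc above uses an unquoted `EOF` delimiter, so the shell expands `${IP}`, `${APISERVER_COUNT}`, and the `${ETCD_ENDPOINTS[...]}` references at write time. The generated unit file bakes in this node's values, which is why the variables must be set before the `cat` runs. A minimal sketch of that behavior (scratch file and sample value only, safe to run anywhere):

```shell
# Unquoted <<EOF expands shell variables when the file is written.
IP=192.168.1.123            # sample value; each node sets its own
tmp=$(mktemp)
cat <<EOF > "$tmp"
--advertise-address=${IP}
EOF
line=$(cat "$tmp")          # read back what was actually written
echo "$line"                # the literal IP, not the variable name
rm -f "$tmp"
```

If you quoted the delimiter (`<<'EOF'`), the unit file would contain the literal string `${IP}` and the apiserver would fail to start.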

Note: whether 10.233.0.0 here should be changed to 192.233.0.0 still needs to be verified:

  --service-cluster-ip-range=10.233.0.0/16 \\

    changed to:
      --service-cluster-ip-range=192.233.0.0/16 \\
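As a side note, the `--etcd-servers` value in the unit above is built by indexing the `ETCD_ENDPOINTS` array element by element. A sketch of an equivalent way to assemble the same string that works for any number of etcd members (pure bash, no cluster required):

```shell
# Build the --etcd-servers value from the array, however many members it has.
ETCD_ENDPOINTS=(192.168.1.123 192.168.1.124 192.168.1.125)
ETCD_SERVERS=$(printf 'https://%s:2379,' "${ETCD_ENDPOINTS[@]}")
ETCD_SERVERS=${ETCD_SERVERS%,}   # trim the trailing comma
echo "$ETCD_SERVERS"
```

Either form produces the same flag value; the printf version just avoids hard-coding the number of members.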

2. Configure kube-controller-manager

Run this on node-1 first, then on node-2:

# Move the kubeconfig file into place
$ mv kube-controller-manager.kubeconfig /etc/kubernetes/

# Create kube-controller-manager.service
$ cat <<EOF > /etc/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-controller-manager \\
  --bind-address=0.0.0.0 \\
  --cluster-cidr=192.200.0.0/16 \\
  --cluster-name=kubernetes \\
  --cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \\
  --cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \\
  --cluster-signing-duration=876000h0m0s \\
  --kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \\
  --leader-elect=true \\
  --root-ca-file=/etc/kubernetes/ssl/ca.pem \\
  --service-account-private-key-file=/etc/kubernetes/ssl/service-account-key.pem \\
  --service-cluster-ip-range=192.233.0.0/16 \\
  --use-service-account-credentials=true \\
  --v=1
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

3. Configure kube-scheduler

Run this on node-1 first, then on node-2:

# Move the kubeconfig file into place
$ mv kube-scheduler.kubeconfig /etc/kubernetes

# Create the kube-scheduler service unit
$ cat <<EOF > /etc/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-scheduler \\
  --authentication-kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \\
  --authorization-kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \\
  --kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \\
  --leader-elect=true \\
  --bind-address=0.0.0.0 \\
  --port=0 \\
  --v=1
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
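Before moving on, it can be worth sanity-checking that all three unit files landed where systemd expects them. A small sketch (`UNIT_DIR` is a variable introduced here only so the check is easy to point elsewhere):

```shell
# Check that the three control-plane unit files exist.
UNIT_DIR=${UNIT_DIR:-/etc/systemd/system}
missing=""
for u in kube-apiserver kube-controller-manager kube-scheduler; do
  [ -f "$UNIT_DIR/$u.service" ] || missing="$missing $u.service"
done
if [ -z "$missing" ]; then
  echo "all unit files present"
else
  echo "missing:$missing"
fi
```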

4. Start the services

Run this on node-1 first, then on node-2:

$ systemctl daemon-reload
$ systemctl enable kube-apiserver
$ systemctl enable kube-controller-manager
$ systemctl enable kube-scheduler
$ systemctl restart kube-apiserver
$ systemctl restart kube-controller-manager
$ systemctl restart kube-scheduler
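After the restarts, each unit should report active. The loop below checks all three at once (a sketch; it prints "unknown" on machines where systemctl is unavailable, so it is safe to dry-run anywhere):

```shell
# Report the state of each control-plane service.
status_report=""
for u in kube-apiserver kube-controller-manager kube-scheduler; do
  # is-active prints the state and exits non-zero unless the unit is active;
  # fall back to "unknown" if systemctl produced no output at all.
  s=$(systemctl is-active "$u" 2>/dev/null) || s=${s:-unknown}
  status_report="$status_report$u=$s "
  printf '%-26s %s\n' "$u" "$s"
done
```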

5. Verify the services

Port check
# Ports each component listens on
$ netstat -ntlp
tcp        0      0 127.0.0.1:2379          0.0.0.0:*               LISTEN      6887/etcd
tcp        0      0 10.155.19.223:2379      0.0.0.0:*               LISTEN      6887/etcd
tcp        0      0 10.155.19.223:2380      0.0.0.0:*               LISTEN      6887/etcd
tcp6       0      0 :::6443                 :::*                    LISTEN      4088/kube-apiserver
tcp6       0      0 :::10252                :::*                    LISTEN      2910/kube-controlle
tcp6       0      0 :::10257                :::*                    LISTEN      2910/kube-controlle
tcp6       0      0 :::10259                :::*                    LISTEN      4128/kube-scheduler
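The port check can also be scripted rather than read by eye. `ss` is the modern replacement for `netstat`; the sketch below only reports whether each expected secure port (6443 apiserver, 10257 controller-manager, 10259 scheduler) has a listener, and degrades gracefully if `ss` is absent or a service has not started yet:

```shell
# Probe the expected listening ports.
listen_report=""
for p in 6443 10257 10259; do
  if ss -ntl 2>/dev/null | grep -q ":$p "; then state=listening; else state=absent; fi
  listen_report="$listen_report$p=$state "
  echo "port $p: $state"
done
```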
System log check
# Check the system log for errors from the components
$ journalctl -f

Output:

[root@homaybd03 ~]# journalctl -f
-- Logs begin at Mon 2022-05-16 07:50:25 CST. --
Jun 04 14:12:27 homaybd03 su[29164]: pam_unix(su-l:session): session opened for user ambari-qa by (uid=0)
Jun 04 14:12:28 homaybd03 su[29164]: pam_unix(su-l:session): session closed for user ambari-qa
Jun 04 14:12:28 homaybd03 systemd[1]: Removed slice User Slice of ambari-qa.
Jun 04 14:12:31 homaybd03 CommAmqpListener[29223]: Initializing CommAmqpListener
Jun 04 14:12:33 homaybd03 kube-apiserver[28103]: I0604 14:12:33.675174   28103 client.go:360] parsed scheme: "passthrough"
Jun 04 14:12:33 homaybd03 kube-apiserver[28103]: I0604 14:12:33.675227   28103 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://192.168.1.125:2379  <nil> 0 <nil>}] <nil> <nil>}
Jun 04 14:12:33 homaybd03 kube-apiserver[28103]: I0604 14:12:33.675237   28103 clientconn.go:948] ClientConn switching balancer to "pick_first"
Jun 04 14:12:36 homaybd03 kube-apiserver[28103]: I0604 14:12:36.218127   28103 client.go:360] parsed scheme: "passthrough"
Jun 04 14:12:36 homaybd03 kube-apiserver[28103]: I0604 14:12:36.218180   28103 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://192.168.1.124:2379  <nil> 0 <nil>}] <nil> <nil>}
Jun 04 14:12:36 homaybd03 kube-apiserver[28103]: I0604 14:12:36.218192   28103 clientconn.go:948] ClientConn switching balancer to "pick_first"
Jun 04 14:13:06 homaybd03 kube-apiserver[28103]: I0604 14:13:06.743705   28103 client.go:360] parsed scheme: "passthrough"
Jun 04 14:13:06 homaybd03 kube-apiserver[28103]: I0604 14:13:06.744515   28103 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://192.168.1.125:2379  <nil> 0 <nil>}] <nil> <nil>}
Jun 04 14:13:06 homaybd03 kube-apiserver[28103]: I0604 14:13:06.744527   28103 clientconn.go:948] ClientConn switching balancer to "pick_first"
Jun 04 14:13:07 homaybd03 kube-apiserver[28103]: I0604 14:13:07.767893   28103 client.go:360] parsed scheme: "passthrough"
Jun 04 14:13:07 homaybd03 kube-apiserver[28103]: I0604 14:13:07.767943   28103 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://192.168.1.123:2379  <nil> 0 <nil>}] <nil> <nil>}
Jun 04 14:13:07 homaybd03 kube-apiserver[28103]: I0604 14:13:07.767966   28103 clientconn.go:948] ClientConn switching balancer to "pick_first"

6. Configure kubectl

kubectl is the command-line client for managing a Kubernetes cluster; we already downloaded it to all the master nodes in an earlier part. Now let's configure it so it can talk to the cluster.

Run the following on node-1 and node-2:

# Create kubectl's configuration directory
$ mkdir -p ~/.kube/
# Move the admin kubeconfig to kubectl's default location
$ mv ~/admin.kubeconfig ~/.kube/config
# Test
$ kubectl get nodes

When you run commands such as kubectl exec, kubectl run, or kubectl logs, the apiserver forwards the request to the kubelet. The RBAC rule below authorizes the apiserver to call the kubelet API.

$ kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kubernetes

Those who act often succeed; those who walk often arrive.