Renewing Kubernetes cluster certificates (kubeadm)
Applicable environment:
A Kubernetes cluster deployed with kubeadm; the default certificate directory is /etc/kubernetes/pki.
If the certificate directory in your environment is not pki (e.g. ssl), create a corresponding symlink.
This article uses a highly available cluster (3 masters) as the example.
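Before renewing anything, it helps to confirm which certificates are actually expired or close to expiry. The sketch below demonstrates the check against a throwaway self-signed certificate; on a real master, point CERT at the files under /etc/kubernetes/pki (or ssl), e.g. apiserver.crt. The paths and the 7-day threshold here are illustrative assumptions.

```shell
# Generate a throwaway self-signed cert purely for demonstration
# (on a real master, set CERT=/etc/kubernetes/pki/apiserver.crt etc.).
CERT=/tmp/demo.crt
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo.key \
  -out "$CERT" -days 30 -subj "/CN=demo" 2>/dev/null

# Print the expiry date of the certificate.
openssl x509 -noout -enddate -in "$CERT"

# Exit non-zero if the cert expires within the next 7 days.
openssl x509 -noout -checkend $((7*24*3600)) -in "$CERT" \
  && echo "cert valid for more than 7 days"
```

Running this against each .crt file in the certificate directory quickly shows which ones need `kubeadm alpha certs renew`.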
Master nodes:
2. Renew the expired certificates (/etc/kubernetes) (master1)
Create the symlink pki -> ssl (skip if pki already exists):
ln -s ssl/ pki
kubeadm alpha certs renew apiserver
kubeadm alpha certs renew apiserver-kubelet-client
kubeadm alpha certs renew front-proxy-client
3. Update the kubeconfig files (/etc/kubernetes) (master1)
The following need updating: admin.conf / scheduler.conf / controller-manager.conf / kubelet.conf
kubeadm alpha certs renew admin.conf
kubeadm alpha certs renew controller-manager.conf
kubeadm alpha certs renew scheduler.conf
# The command below uses master1 as the example; substitute your cluster's actual node name.
kubeadm alpha kubeconfig user --client-name=system:node:master1 --org=system:nodes > kubelet.conf
4. If the apiserver address in the kubeconfig files above is not the LB address, change it to the LB address: (master1)
https://192.168.0.13:6443 -> https://{ lb domain or ip }:6443
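The address swap can be done with sed. The sketch below operates on a demo file under /tmp so it is safe to run anywhere; on a real master you would run the same sed against /etc/kubernetes/admin.conf, scheduler.conf, controller-manager.conf and kubelet.conf. The LB domain lb.example.com is a placeholder assumption — substitute your real LB domain or VIP.

```shell
# Demo kubeconfig fragment (stand-in for /etc/kubernetes/admin.conf etc.).
CONF=/tmp/demo-admin.conf
cat > "$CONF" <<'EOF'
clusters:
- cluster:
    server: https://192.168.0.13:6443
EOF

# Rewrite the node-local apiserver address to the LB address.
# lb.example.com is a placeholder -- use your real LB domain or IP.
sed -i 's#server: https://192\.168\.0\.13:6443#server: https://lb.example.com:6443#' "$CONF"
grep 'server:' "$CONF"
```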
5. Restart the Kubernetes master components: (master1)
docker ps -af "name=k8s_kube-apiserver*" -q | xargs --no-run-if-empty docker rm -f
docker ps -af "name=k8s_kube-scheduler*" -q | xargs --no-run-if-empty docker rm -f
docker ps -af "name=k8s_kube-controller-manager*" -q | xargs --no-run-if-empty docker rm -f
systemctl restart kubelet
6. Verify the kubeconfig files and check node status (master1)
kubectl get node --kubeconfig admin.conf
kubectl get node --kubeconfig scheduler.conf
kubectl get node --kubeconfig controller-manager.conf
kubectl get node --kubeconfig kubelet.conf
7. Sync master1's certificates in /etc/kubernetes/ssl to the same path /etc/kubernetes/ssl on master2 and master3 (back up the old certificates before syncing)
Certificate path: /etc/kubernetes/ssl
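A minimal backup-then-copy sketch for this sync step is below. It uses local /tmp directories as stand-ins so it can be run anywhere; on the real cluster, the source is master1:/etc/kubernetes/ssl and the destination is the same path on master2/master3, reached with scp or rsync over ssh (shown in the comment). All /tmp paths here are assumptions for demonstration only.

```shell
# Local stand-ins for master1's cert dir and master2's cert dir.
SRC=/tmp/demo-ssl
DST=/tmp/demo-ssl-master2
mkdir -p "$SRC" "$DST"
echo fake-cert > "$SRC/apiserver.crt"   # placeholder file, not a real cert

# 1) Back up the destination's old certificates before overwriting.
tar -czf "/tmp/ssl-backup-$(date +%Y%m%d).tar.gz" -C "$DST" .

# 2) Copy the renewed certificates across. On a real cluster:
#      scp -r /etc/kubernetes/ssl/ master2:/etc/kubernetes/ssl/
cp -a "$SRC"/. "$DST"/
ls "$DST"
```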
8. Update the kubeconfig files (/etc/kubernetes) (master2, master3)
kubeadm alpha certs renew admin.conf
kubeadm alpha certs renew controller-manager.conf
kubeadm alpha certs renew scheduler.conf
# The commands below use master2 and master3 as examples; substitute your cluster's actual node names.
kubeadm alpha kubeconfig user --client-name=system:node:master2 --org=system:nodes > kubelet.conf   (master2)
kubeadm alpha kubeconfig user --client-name=system:node:master3 --org=system:nodes > kubelet.conf   (master3)
9. If the apiserver address in the kubeconfig files above is not the LB address, change it to the LB address: (master2, master3)
https://192.168.0.13:6443 -> https://{ lb domain or ip }:6443
Note: the files involved are admin.conf, controller-manager.conf, scheduler.conf, and kubelet.conf.
10. Restart the corresponding master components on master2 and master3
docker ps -af "name=k8s_kube-apiserver*" -q | xargs --no-run-if-empty docker rm -f
docker ps -af "name=k8s_kube-scheduler*" -q | xargs --no-run-if-empty docker rm -f
docker ps -af "name=k8s_kube-controller-manager*" -q | xargs --no-run-if-empty docker rm -f
systemctl restart kubelet
11. Verify the kubeconfig files (master2, master3)
kubectl get node --kubeconfig admin.conf
kubectl get node --kubeconfig scheduler.conf
kubectl get node --kubeconfig controller-manager.conf
kubectl get node --kubeconfig kubelet.conf
12. Update ~/.kube/config (master1, master2, master3)
cp admin.conf ~/.kube/config
Note: if kubectl is also needed on worker nodes, copy ~/.kube/config from master1 to ~/.kube/config on the corresponding nodes.
13. Verify ~/.kube/config:
kubectl get node

Worker nodes: (operate on the nodes one at a time; if the kubelet is already configured for automatic certificate rotation, this section can be skipped)
1. Run kubeadm token list; if the output is empty or the dates shown are expired, a new token must be generated.
2. Run kubeadm token create to generate a new token.
3. Record the token value.
4. Replace the token in /etc/kubernetes/bootstrap-kubelet.conf (all worker nodes)
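The token replacement can also be scripted with sed. The sketch below writes a demo file under /tmp so it is safe to run anywhere; on a real worker node you would run the sed against /etc/kubernetes/bootstrap-kubelet.conf. Both token values shown are placeholder assumptions — use the value printed by `kubeadm token create` on a master.

```shell
# Stand-in for /etc/kubernetes/bootstrap-kubelet.conf on a worker node.
CONF=/tmp/bootstrap-kubelet.conf
cat > "$CONF" <<'EOF'
users:
- name: tls-bootstrap-token-user
  user:
    token: abcdef.0123456789abcdef
EOF

# Placeholder -- substitute the token recorded from `kubeadm token create`.
NEW_TOKEN=ghijkl.9876543210fedcba
sed -i "s#token: .*#token: ${NEW_TOKEN}#" "$CONF"
grep 'token:' "$CONF"
```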
5. Delete /etc/kubernetes/kubelet.conf (all worker nodes)
rm -rf /etc/kubernetes/kubelet.conf
6. Restart the kubelet (all worker nodes)
systemctl restart kubelet
7. Check node status:
kubectl get node