Installing a Single-Master Kubernetes Cluster (v1.17.3) with kubeadm Using the Alibaba Cloud Mirrors
一、Environment preparation
1、System requirements
Three pay-as-you-go Alibaba Cloud ECS instances.

Requirements: CentOS 7.6–7.8. The table below shows the compatibility results from https://kuboard.cn/install/install-k8s.html#%E6%A3%80%E6%9F%A5-centos-hostname .
| CentOS version | Compatibility | Notes |
| --- | --- | --- |
| 7.8 | 😄 | Verified |
| 7.7 | 😄 | Verified |
| 7.6 | 😄 | Verified |
| 7.5 | 😞 | Confirmed: kubelet fails to start |
| 7.4 | 😞 | Confirmed: kubelet fails to start |
| 7.3 | 😞 | Confirmed: kubelet fails to start |
| 7.2 | 😞 | Confirmed: kubelet fails to start |
2、Prerequisites (all nodes)
- CentOS version 7.6 or 7.7, at least 2 CPU cores, and at least 4 GB of memory
- hostname is not localhost and contains no underscores, dots, or uppercase letters
- Every node has a fixed private IP address (all cluster machines on the same private network)
- All node IPs are mutually reachable without NAT, with no firewall or security-group isolation between them
- No node runs containers directly with docker run or docker-compose; containers run only as Kubernetes Pods
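A minimal pre-flight sketch for checking these prerequisites on each node; the hostname `k8s-master` is only an illustrative value, and disabling firewalld is one way of meeting the "no firewall isolation" requirement:

```bash
# Check CPU cores and memory (need >= 2 cores and >= 4 GB)
nproc
free -h

# Check the CentOS release (should be 7.6 - 7.8)
cat /etc/redhat-release

# Check the hostname; set a compliant one if needed (no underscores, dots, or uppercase letters)
hostnamectl status
hostnamectl set-hostname k8s-master   # example name; use something like node1 / node2 on the workers

# Make sure the local firewall does not isolate the nodes from each other
systemctl stop firewalld && systemctl disable firewalld
```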
二、Install Docker (all nodes)
```bash
# 1. Install Docker

## 1.1 Remove old versions
sudo yum remove docker \
                docker-client \
                docker-client-latest \
                docker-common \
                docker-latest \
                docker-latest-logrotate \
                docker-logrotate \
                docker-engine

## 1.2 Install base dependencies
yum install -y yum-utils \
               device-mapper-persistent-data \
               lvm2

## 1.3 Configure the Docker yum repository
sudo yum-config-manager \
    --add-repo \
    http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

## 1.4 Install and start Docker
yum install -y docker-ce-19.03.8 docker-ce-cli-19.03.8 containerd.io
systemctl enable docker
systemctl start docker

## 1.5 Configure a registry mirror (accelerator)
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://t1gbabbr.mirror.aliyuncs.com"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
```
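To confirm the installation and the registry mirror took effect, something like the following can be run (exact output varies by environment):

```bash
docker version                                # client and server should both report 19.03.8
docker info | grep -A1 "Registry Mirrors"     # should list the Aliyun mirror configured above
```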
三、Install the Kubernetes environment

1、Install kubelet, kubeadm, and kubectl (all nodes)
```bash
# Configure the Kubernetes yum repository
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

# Remove old versions
yum remove -y kubelet kubeadm kubectl

# Install kubelet, kubeadm, kubectl
yum install -y kubelet-1.17.3 kubeadm-1.17.3 kubectl-1.17.3

# Enable on boot and start kubelet
systemctl enable kubelet && systemctl start kubelet
# Note: if you check kubelet's status now, it will restart in a loop while it waits
# for cluster instructions and initialization. This is expected.
```
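As a quick sanity check, the installed versions can be verified before moving on (a small sketch; output formats differ slightly between releases):

```bash
kubeadm version -o short            # should print v1.17.3
kubectl version --client --short    # client version only; the API server is not up yet
rpm -qa | grep kube                 # lists the installed kubelet/kubeadm/kubectl packages
```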
2、Initialize the master node (run on the master node)

```bash
# 1. Pre-pull the images the master node needs (optional)
# Create a .sh file with the following content and run it:
#!/bin/bash
images=(
  kube-apiserver:v1.17.3
  kube-proxy:v1.17.3
  kube-controller-manager:v1.17.3
  kube-scheduler:v1.17.3
  coredns:1.6.5
  etcd:3.4.3-0
  pause:3.1
)
for imageName in ${images[@]} ; do
  docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
done

# 2. Initialize the master node
kubeadm init \
  --apiserver-advertise-address=172.26.165.243 \
  --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers \
  --kubernetes-version v1.17.3 \
  --service-cidr=10.96.0.0/16 \
  --pod-network-cidr=192.168.0.0/16
# Service network vs. Pod network: every Pod gets its own IP address (comparable to a
# docker container IP on a bridge) and Pods can reach each other across the whole cluster
# (the /16 pod CIDR covers roughly 255*255 addresses); Services get their own virtual IPs
# from the service CIDR (comparable to docker service create).

# 3. Configure kubectl
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# 4. Save the join command (token) printed by kubeadm init
kubeadm join 172.26.165.243:6443 --token afb6st.b7jz45ze7zpg65ii \
    --discovery-token-ca-cert-hash sha256:e5e5854508dafd04f0e9cf1f502b5165e25ff3017afd23cade0fe6acb5bc14ab

# 5. Deploy the network plugin (Calico)
# Upload the manifest and apply it:
# kubectl apply -f calico-3.13.1.yaml
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
# If your network is fine, the step below is unnecessary; otherwise pre-pull the Calico images:
# image: calico/cni:v3.14.0
# image: calico/pod2daemon-flexvol:v3.14.0
# image: calico/node:v3.14.0
# image: calico/kube-controllers:v3.14.0

# 6. Watch the status and wait for everything to become ready
watch kubectl get pod -n kube-system -o wide
```
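Once the kube-system pods are all Running, the control plane can be checked with standard kubectl commands, for example:

```bash
kubectl get nodes                         # the master should show STATUS Ready once Calico is up
kubectl get pods -n kube-system -o wide   # apiserver, scheduler, controller-manager, coredns, calico...
```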
3、Join the worker nodes to the cluster

```bash
# 1. Join using the token command printed by the master
kubeadm join 172.26.248.150:6443 --token ktnvuj.tgldo613ejg5a3x4 \
    --discovery-token-ca-cert-hash sha256:f66c496cf7eb8aa06e1a7cdb9b6be5b013c613cdcf5d1bbd88a6ea19a2b454ec

# 2. If more than 2 hours have passed and the token is lost:
kubeadm token create --print-join-command           # print a new join command
kubeadm token create --ttl 0 --print-join-command   # create a join command whose token never expires
```
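After each worker has joined, it can be confirmed from the master that the new nodes register and eventually become Ready:

```bash
kubectl get nodes -o wide   # workers appear with ROLES <none> and turn Ready once the network plugin runs on them
```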
4、Set up NFS as the default StorageClass

4.1、Configure the NFS server
```bash
yum install -y nfs-utils

# Run `vi /etc/exports` to create the exports file with the following content
# (or write it directly):
echo "/nfs/data/ *(insecure,rw,sync,no_root_squash)" > /etc/exports
# /nfs/data 172.26.248.0/20(rw,no_root_squash)

# Create the shared directory and start the nfs service
mkdir -p /nfs/data
systemctl enable rpcbind
systemctl enable nfs-server
systemctl start rpcbind
systemctl start nfs-server
exportfs -r

# Check that the configuration took effect
exportfs
# The output should list the export:
# /nfs/data
```

Test a Pod mounting NFS directly:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: vol-nfs
  namespace: default
spec:
  volumes:
  - name: html
    nfs:
      path: /nfs/data                    # 1000G
      server: <your NFS server address>
  containers:
  - name: myapp
    image: nginx
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html/
```
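One way to exercise this direct-mount test; the file name vol-nfs-pod.yaml is just an assumed name for the manifest above:

```bash
kubectl apply -f vol-nfs-pod.yaml
kubectl get pod vol-nfs
# Any file placed in /nfs/data on the NFS server should show up in the pod's
# /usr/share/nginx/html/ directory, and vice versa.
```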
4.2、Set up the NFS client

```bash
# On the server side, open TCP/UDP ports 111, 662, 875, 892 and 2049 in the firewall,
# otherwise remote clients cannot connect.

# Install the client tools
yum install -y nfs-utils

# Check whether the NFS server exports a shared directory
# showmount -e $(NFS server IP)
showmount -e 172.26.165.243
# Expected output:
# Export list for 172.26.165.243
# /nfs/data *

# Mount the server's shared directory onto the local path /root/nfsmount
mkdir /root/nfsmount
# mount -t nfs $(NFS server IP):/root/nfs_root /root/nfsmount   # high-availability backup variant
mount -t nfs 172.26.165.243:/nfs/data /root/nfsmount

# Write a test file
echo "hello nfs server" > /root/nfsmount/test.txt

# On the NFS server, verify that the file was written (the shared directory is /nfs/data)
cat /nfs/data/test.txt
```
4.3、Configure dynamic provisioning

4.3.1、Create the provisioner (the NFS environment was set up above)
| Setting | Value | Description |
| --- | --- | --- |
| Name | nfs-storage | Custom storage class name |
| NFS Server | 172.26.165.243 | IP address of the NFS service |
| NFS Path | /nfs/data | Path shared by the NFS service |
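The original post only lists the provisioner's parameters, not its manifest. A minimal sketch of an nfs-client-provisioner Deployment wired to those parameters might look like the following; the file name, the image, and the ServiceAccount name nfs-provisioner (which still needs the usual RBAC rules documented for that image) are assumptions, while the provisioner name storage.pri/nfs matches the StorageClass created in the next step:

```yaml
# provisioner-nfs.yaml  (assumed file name)
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-provisioner            # assumed name; bind it to the RBAC rules the provisioner image requires
  namespace: default
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-provisioner
      containers:
      - name: nfs-client-provisioner
        image: quay.io/external_storage/nfs-client-provisioner:latest
        volumeMounts:
        - name: nfs-client-root
          mountPath: /persistentvolumes
        env:
        - name: PROVISIONER_NAME
          value: storage.pri/nfs     # must match the provisioner field of the StorageClass below
        - name: NFS_SERVER
          value: 172.26.165.243      # NFS Server from the table above
        - name: NFS_PATH
          value: /nfs/data           # NFS Path from the table above
      volumes:
      - name: nfs-client-root
        nfs:
          server: 172.26.165.243
          path: /nfs/data
```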
4.3.2、Create the StorageClass
```yaml
# Create the StorageClass
# vi storageclass-nfs.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: storage-nfs
provisioner: storage.pri/nfs
reclaimPolicy: Delete
```

The "reclaim policy" can take three values: Retain, Recycle, and Delete.
- Retain
  - The PV released by its PVC, together with the data on it, is preserved; the PV's status changes to "released" and it will not be bound by another PVC. The cluster administrator reclaims the storage manually, as follows:
    1. Manually delete the PV; the backing storage asset (AWS EBS, GCE PD, Azure Disk, or Cinder volume) still exists afterwards.
    2. Manually clean up the data on the backing volume.
    3. Manually delete the backing volume, or reuse it by creating a new PV for it.
- Delete
  - Deletes both the PV released by the PVC and its backing storage volume. For dynamically provisioned PVs, the reclaim policy is inherited from their StorageClass and defaults to Delete. The cluster administrator should set the StorageClass's reclaim policy to the form users expect; otherwise users have to edit the reclaim policy of each dynamically created PV by hand.
- Recycle
  - Keeps the PV but wipes the data on it. This policy is deprecated.
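For instance, if a dynamically provisioned PV should survive deletion of its claim, its reclaim policy can be switched to Retain after creation (the PV name below is a placeholder):

```bash
kubectl patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
kubectl get pv    # the RECLAIM POLICY column should now show Retain for that PV
```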
4.3.3、Change the default StorageClass
```bash
# Change the cluster's default StorageClass
# https://kubernetes.io/zh/docs/tasks/administer-cluster/change-default-storage-class/#%e4%b8%ba%e4%bb%80%e4%b9%88%e8%a6%81%e6%94%b9%e5%8f%98%e9%bb%98%e8%ae%a4-storage-class
kubectl patch storageclass storage-nfs -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
```
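Afterwards the class should be marked as the default:

```bash
kubectl get storageclass   # the NAME column should show "storage-nfs (default)"
```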
4.4、Verify NFS dynamic provisioning

4.4.1、Create a PVC
```yaml
# vi pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-claim-01
  # annotations:
  #   volume.beta.kubernetes.io/storage-class: "storage-nfs"
spec:
  storageClassName: storage-nfs   # must match the name of the StorageClass exactly
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
```
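Applying the claim and checking that the provisioner binds it (pvc.yaml is the file name used in the comment above):

```bash
kubectl apply -f pvc.yaml
kubectl get pvc pvc-claim-01   # STATUS should move from Pending to Bound
kubectl get pv                 # a PV created by the provisioner should appear
```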
4.4.2、Use the PVC

```yaml
# vi testpod.yaml
kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
  - name: test-pod
    image: busybox
    command:
    - "/bin/sh"
    args:
    - "-c"
    - "touch /mnt/SUCCESS && exit 0 || exit 1"
    volumeMounts:
    - name: nfs-pvc
      mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
  - name: nfs-pvc
    persistentVolumeClaim:
      claimName: pvc-claim-01
```
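Running the test (testpod.yaml as named in the comment above); if dynamic provisioning works, a SUCCESS file shows up in the subdirectory the provisioner created for the claim on the NFS server:

```bash
kubectl apply -f testpod.yaml
kubectl get pod test-pod   # should finish with STATUS Completed
ls /nfs/data/              # on the NFS server: look inside the PVC's subdirectory for the SUCCESS file
```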
5、Install metrics-server

Install metrics-server first (the YAML below already has the image and configuration adjusted, so it can be applied directly). With it you can see pod and node resource usage (by default only CPU and memory audit information; for anything more advanced, Prometheus is integrated later).

```yaml
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:aggregated-metrics-reader
  labels:
    rbac.authorization.k8s.io/aggregate-to-view: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
rules:
- apiGroups: ["metrics.k8s.io"]
  resources: ["pods", "nodes"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: apiregistration.k8s.io/v1beta1
kind: APIService
metadata:
  name: v1beta1.metrics.k8s.io
spec:
  service:
    name: metrics-server
    namespace: kube-system
  group: metrics.k8s.io
  version: v1beta1
  insecureSkipTLSVerify: true
  groupPriorityMinimum: 100
  versionPriority: 100
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: metrics-server
  namespace: kube-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    k8s-app: metrics-server
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  template:
    metadata:
      name: metrics-server
      labels:
        k8s-app: metrics-server
    spec:
      serviceAccountName: metrics-server
      volumes:
      # mount in tmp so we can safely use from-scratch images and/or read-only containers
      - name: tmp-dir
        emptyDir: {}
      containers:
      - name: metrics-server
        image: mirrorgooglecontainers/metrics-server-amd64:v0.3.6
        imagePullPolicy: IfNotPresent
        args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-insecure-tls
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        ports:
        - name: main-port
          containerPort: 4443
          protocol: TCP
        securityContext:
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        volumeMounts:
        - name: tmp-dir
          mountPath: /tmp
      nodeSelector:
        kubernetes.io/os: linux
        kubernetes.io/arch: "amd64"
---
apiVersion: v1
kind: Service
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    kubernetes.io/name: "Metrics-server"
    kubernetes.io/cluster-service: "true"
spec:
  selector:
    k8s-app: metrics-server
  ports:
  - port: 443
    protocol: TCP
    targetPort: main-port
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:metrics-server
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  - nodes/stats
  - namespaces
  - configmaps
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
```
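After applying this manifest (saved, for example, as metrics-server.yaml, an assumed file name) and waiting for the pod to become ready, the resource metrics API can be exercised with:

```bash
kubectl apply -f metrics-server.yaml
kubectl get pods -n kube-system | grep metrics-server
kubectl top nodes                      # CPU/memory usage per node
kubectl top pods --all-namespaces      # per-pod usage across all namespaces
```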
Reference: https://www.yuque.com/leifengyang/kubesphere/grw8se