Setting up a Kubernetes (k8s) cluster using domestic mirror sources
As the old saying goes: keep studying and improving yourself, so that you know and understand more than others do.
Back to the topic. We covered Docker previously; as the business keeps growing, the number of Docker containers and physical machines keeps increasing, and logging in to every machine to operate Docker by hand quickly becomes a chore.
At that point we need a good tool to manage Docker for us: one that creates, runs, scales, and destroys containers, monitors which container has gone down and restarts it, and so on.
Kubernetes (k8s) is an excellent choice for this, so today let's look at how to set up a Kubernetes (k8s) cluster.
Server A IP: 192.168.1.12
Server B IP: 192.168.1.11
Server C IP: 192.168.1.15
Server A hostname: zhuifengren2
Server B hostname: zhuifengren3
Server C hostname: zhuifengren4
Prepare three servers running CentOS 7.
Docker is already installed on all three servers; for the installation steps, see my other article "Docker 急速入門" (Docker 急速入門_追風(fēng)人的博客-CSDN博客).
Server A serves as the Master node, and Server B and Server C serve as worker nodes.
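The hostnames above are not used directly by the commands that follow, but if they are not resolvable via DNS it can be convenient to map them on every server. A minimal sketch based on the addresses listed above (optional; adjust to your own environment):
cat >> /etc/hosts <<EOF
192.168.1.12 zhuifengren2
192.168.1.11 zhuifengren3
192.168.1.15 zhuifengren4
EOF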
3.1 Official documentation
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/
3.2 Server requirements
At least 2 GB of memory
At least 2 CPU cores
At least 20 GB of disk space
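A quick way to check whether a machine meets these requirements (the exact output format varies by system):
free -h    # total memory
nproc      # number of CPU cores
df -h /    # available space on the root filesystem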
3.3 Disable SELinux
Method 1:
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
Method 2:
vim /etc/sysconfig/selinux
Change SELINUX=enforcing to SELINUX=disabled
Reboot the server
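Either way, you can confirm the resulting SELinux state afterwards (expect Permissive or Disabled):
getenforce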
3.4 Configure kernel network settings
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
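The modules-load.d file only takes effect at boot, so if br_netfilter is not already loaded, load it manually and then verify that both sysctls report 1:
modprobe br_netfilter
lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables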
3.5 Disable system swap
swapoff -a
vi /etc/fstab
Comment out the swap auto-mount entry
vi /etc/sysctl.d/k8s.conf
Add the following line:
vm.swappiness=0
sysctl -p /etc/sysctl.d/k8s.conf
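A quick way to confirm swap is fully off:
free -h          # the Swap line should show 0B total
cat /proc/swaps  # should list no active swap devices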
3.6 Install and start Kubernetes (k8s)
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
exclude=kube*
EOF
yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
systemctl enable kubelet
systemctl restart kubelet
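This installs the latest version available in the mirror (this article assumes the 1.22.x line; if you need to pin a release you can install versioned packages such as kubelet-1.22.3, assuming that version exists in the repository). You can confirm what was installed with:
kubeadm version
kubelet --version
kubectl version --client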
Steps 3.3 through 3.6 must be executed on all three servers.
4.1 Modify the Docker configuration
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ],
  "data-root": "/data/docker"
}
EOF
systemctl daemon-reload
systemctl restart docker
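Because kubelet is configured to use the systemd cgroup driver as well, it is worth confirming that Docker picked up the change:
docker info | grep -i cgroup    # should report: Cgroup Driver: systemd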
4.2 Check which images are needed
kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.22.3
k8s.gcr.io/kube-controller-manager:v1.22.3
k8s.gcr.io/kube-scheduler:v1.22.3
k8s.gcr.io/kube-proxy:v1.22.3
k8s.gcr.io/pause:3.5
k8s.gcr.io/etcd:3.5.0-0
k8s.gcr.io/coredns/coredns:v1.8.4
4.3 Pull the images from a domestic mirror
Since k8s.gcr.io is unreachable from within China, we first pull the images from a domestic registry and then retag them.
Run the following script:
#!/bin/bash
 docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.22.3
 docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.22.3
 docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.22.3
 docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.22.3
 docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.5
 docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.0-0
 docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.8.4
 docker pull quay.io/coreos/flannel:v0.15.1-amd64
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.22.3 k8s.gcr.io/kube-apiserver:v1.22.3
 docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.22.3 k8s.gcr.io/kube-controller-manager:v1.22.3
 docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.22.3 k8s.gcr.io/kube-scheduler:v1.22.3
 docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.22.3 k8s.gcr.io/kube-proxy:v1.22.3
 docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.5 k8s.gcr.io/pause:3.5
 docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.0-0 k8s.gcr.io/etcd:3.5.0-0
 docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.8.4 k8s.gcr.io/coredns/coredns:v1.8.4
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.22.3
 docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.22.3
 docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.22.3
 docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.22.3
 docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.5
 docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.0-0
 docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.8.4
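After the script finishes, the locally cached images should all carry the k8s.gcr.io names that kubeadm expects; a quick way to check:
docker images | grep -E 'k8s.gcr.io|flannel'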
Steps 4.1, 4.2 and 4.3 must be executed on all three servers.
4.4 Initialize the cluster
Run on the Master node:
kubeadm init --apiserver-advertise-address=192.168.1.12 --pod-network-cidr=10.244.0.0/16
Here, 192.168.1.12 is the IP address of the Master node; adjust it to your environment. The --pod-network-cidr value of 10.244.0.0/16 matches the default network range used by flannel, which we install later.
4.5 Fixing the error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused
If cluster initialization fails with the error above, perform the following steps on the Master node:
vi /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf
Add:
Environment="KUBELET_SYSTEM_PODS_ARGS=--pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true --fail-swap-on=false"
cat > /var/lib/kubelet/config.yaml <<EOF
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
EOF
systemctl daemon-reload
systemctl restart kubelet
kubeadm reset -f
4.6 Run the cluster initialization command again
Run on the Master node:
kubeadm init --apiserver-advertise-address=192.168.1.12 --pod-network-cidr=10.244.0.0/16
If you see output like the following, initialization succeeded:
 Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
 sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
 https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.1.12:6443 --token x0u0ou.q6271pyjm7cv5hxl \
    --discovery-token-ca-cert-hash sha256:907ffb03d73f7668b96024c328880f95f4249e98da1be44d1caeb01dd62173da
4.7 Export the kubeconfig and set up the network, based on the output above
export KUBECONFIG=/etc/kubernetes/admin.conf
Here we use the flannel network:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
At this point, the Master node is set up.
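Before joining the worker nodes, you can confirm that the control-plane and flannel pods are coming up on the Master (after a short while they should all be in the Running state):
kubectl get pods -n kube-system -o wide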
4.8 Join the two worker nodes to the cluster, using the output from step 4.6
On Server B and Server C, run the following command (taken from the output in step 4.6):
kubeadm join 192.168.1.12:6443 --token x0u0ou.q6271pyjm7cv5hxl \
    --discovery-token-ca-cert-hash sha256:907ffb03d73f7668b96024c328880f95f4249e98da1be44d1caeb01dd62173da
If the join fails, or a worker node stays in the NotReady state, apply the configuration changes described in step 4.5 on that node.
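Note that the bootstrap token printed in step 4.6 expires after 24 hours by default. If it has expired, generate a fresh join command on the Master node and use its output instead:
kubeadm token create --print-join-command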
4.9 Check the cluster state on the Master node
kubectl get node
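The output should look roughly like the following (node names, ages and versions will differ in your environment; this is only an illustration):
NAME           STATUS   ROLES                  AGE   VERSION
zhuifengren2   Ready    control-plane,master   20m   v1.22.3
zhuifengren3   Ready    <none>                 5m    v1.22.3
zhuifengren4   Ready    <none>                 5m    v1.22.3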
If all nodes show the Ready status, the Kubernetes (k8s) cluster has been set up successfully.
Today we walked through how to set up a Kubernetes (k8s) cluster using domestic mirror sources; I hope it helps you in your work.
Appendix: the equivalent image script for v1.21.0:
#!/bin/bash
 docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.21.0
 docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.21.0
 docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.21.0
 docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.21.0
 docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.4.1
 docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.13-0
 docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.8.0
 docker pull quay.io/coreos/flannel:v0.15.1-amd64
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.21.0 k8s.gcr.io/kube-apiserver:v1.21.0
 docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.21.0 k8s.gcr.io/kube-controller-manager:v1.21.0
 docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.21.0 k8s.gcr.io/kube-scheduler:v1.21.0
 docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.21.0 k8s.gcr.io/kube-proxy:v1.21.0
 docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.4.1 k8s.gcr.io/pause:3.4.1
 docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.13-0 k8s.gcr.io/etcd:3.4.13-0
 docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.8.0 k8s.gcr.io/coredns/coredns:v1.8.0
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.21.0
 docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.21.0
 docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.21.0
 docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.21.0
 docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.4.1
 docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.13-0
 docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.8.0