Cloud Native | Kubernetes | A Complete Guide to Deploying and Installing minikube (Revised)
Preface:
To learn a new platform you first need a working instance of that platform, and deploying Kubernetes undoubtedly raises the bar for learning it: whether you install from binaries or with kubeadm, considerable operations skill is required, and a study cluster also takes a fair amount of hardware, with at least three servers needed to complete the deployment.
Tools such as kind or minikube, by contrast, let you stand up a learning environment quickly. They are simple and easy to use, a single server is enough (though that one machine needs more memory, at least 8 GB is recommended), they are highly automated and set almost everything up for you, and they support multiple virtualization engines, including common ones such as Docker, containerd, and KVM. The drawback is that there is essentially no room for customization.
Virtualization engines supported by minikube:
Most of the material in this tutorial is taken from the official docs, at: Welcome! | minikube
Related installation and deployment files (after extracting conntrack.tar.gz, just run rpm -ivh * to install; these are the required dependencies. minikube-images.tar.gz is the image bundle; extract it and load the images into Docker. The three executables go into the /root/.minikube/cache/linux/amd64/v1.18.8/ directory.):
Link: https://pan.baidu.com/s/14-r59VfpZRpfiVGj4IadxA?pwd=k8ss
Extraction code: k8ss
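The offline preparation steps described above (install the dependency RPMs, load the image bundle, stage the binaries) can be sketched as a single function. This is a sketch under assumptions: the function name `prepare_offline` and the archive layout (a `conntrack/` directory of RPMs, a `minikube-images/` directory of image tars) are taken from the description above; adjust paths to where you actually unpacked the files.

```shell
# Sketch of the offline preparation steps, run from the directory holding
# the downloaded archives and the kubeadm/kubectl/kubelet binaries.
prepare_offline() {
  cache=${1:-/root/.minikube/cache/linux/amd64/v1.18.8}

  # 1. Install the conntrack dependency RPMs.
  tar zxf conntrack.tar.gz
  rpm -ivh conntrack/*.rpm

  # 2. Load the bundled images into Docker.
  tar zxf minikube-images.tar.gz
  for img in minikube-images/*; do
    docker load < "$img"
  done

  # 3. Put the binaries where minikube expects its cache.
  mkdir -p "$cache"
  cp kubeadm kubectl kubelet "$cache"/
  chmod a+x "$cache"/kube*
}
```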
Part 1: Getting started with minikube
Prerequisites before installing:
At least 2 CPUs, 2 GB of memory, 20 GB of free disk space, internet access, and one virtualization engine from among Docker, Hyperkit, Hyper-V, KVM, Parallels, Podman, VirtualBox, or VMware Fusion/Workstation. Docker is the easiest to install, so Docker it is; the operating system here is CentOS.
What you’ll need: 2 CPUs or more, 2GB of free memory, 20GB of free disk space, an internet connection, and a container or virtual machine manager, such as: Docker, Hyperkit, Hyper-V, KVM, Parallels, Podman, VirtualBox, or VMware Fusion/Workstation.
For installing Docker offline, see the post "docker的離線安裝以及本地化配置" (zsk_john's blog on CSDN); follow it and make sure the Docker environment is in place.
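The prerequisite numbers above are easy to check up front. A minimal sketch: the function `check_prereqs` (an illustrative name, not part of minikube) takes the observed values and reports anything below the thresholds listed above.

```shell
# Check the minikube prerequisites listed above.
# Usage: check_prereqs <cpus> <free_mem_gb> <free_disk_gb>
check_prereqs() {
  ok=0
  [ "$1" -ge 2 ]  || { echo "need at least 2 CPUs (have $1)"; ok=1; }
  [ "$2" -ge 2 ]  || { echo "need at least 2GB free memory (have ${2}GB)"; ok=1; }
  [ "$3" -ge 20 ] || { echo "need at least 20GB free disk (have ${3}GB)"; ok=1; }
  return $ok
}

# On a live host you could feed it real values, for example:
#   check_prereqs "$(nproc)" "$(free -g | awk '/^Mem:/{print $NF}')" \
#                 "$(df -BG --output=avail / | tail -1 | tr -dc 0-9)"
```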
The Docker version must be at least 18.09 and at most 20.10.
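That version range can be checked mechanically with `sort -V`. A sketch, with one assumption baked in: any 20.10.x patch release is treated as still inside the supported range.

```shell
# version_in_range <docker-version> prints "ok" if the version falls in
# the supported 18.09-20.10 range described above, else "unsupported".
version_in_range() {
  v=$1
  min=18.09
  max=20.10.999   # treat any 20.10.x patch release as supported
  lowest=$(printf '%s\n%s\n' "$min" "$v" | sort -V | head -n1)
  highest=$(printf '%s\n%s\n' "$max" "$v" | sort -V | tail -n1)
  if [ "$lowest" = "$min" ] && [ "$highest" = "$max" ]; then
    echo ok
  else
    echo unsupported
  fi
}

# On a real host:
#   version_in_range "$(docker version --format '{{.Server.Version}}')"
```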
Part 2: Starting the installation
Download the minikube executable:
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube
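If you need to pin a specific release instead of latest, the download URL follows a predictable pattern. This pattern is inferred from the latest-release URL above; verify it against the minikube releases page before relying on it.

```shell
# Build the download URL for a given minikube release (default: latest).
minikube_url() {
  echo "https://storage.googleapis.com/minikube/releases/${1:-latest}/minikube-linux-amd64"
}

# e.g. curl -LO "$(minikube_url v1.26.1)"
```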
Part 3: Importing the images
The images minikube uses when installing Kubernetes are pulled from registries outside China, which are blocked and unreachable domestically; hence this offline image bundle.
[root@slave3 ~]# tar zxf minikube-images.tar.gz
[root@slave3 ~]# cd minikube-images
[root@slave3 minikube-images]# for i in `ls ./*`;do docker load <$i;done
dfccba63d0cc: Loading layer [==================================================>]  80.82MB/80.82MB
Loaded image: gcr.io/k8s-minikube/storage-provisioner:v1.8.1
225df95e717c: Loading layer [==================================================>]  336.4kB/336.4kB
c965b38a6629: Loading layer [==================================================>]  43.58MB/43.58MB
... (output truncated)
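To confirm every archive in the bundle actually made it into Docker, you can collect the names from the "Loaded image:" lines of the output above. A small sketch (`loaded_images` is an illustrative helper name):

```shell
# Print just the image names from docker load's "Loaded image:" lines.
loaded_images() {
  sed -n 's/^Loaded image: //p'
}

# e.g.:  for i in ./*; do docker load < "$i"; done | loaded_images
```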
Part 4: The command to initialize the Kubernetes cluster
Briefly: --image-repository makes minikube pull the images from Alibaba Cloud's mirror, and --cni selects flannel as the network plugin (delete that line if you don't want it). Nothing else needs special attention.
minikube config set driver none
minikube start pod-network-cidr='10.244.0.0/16' \
  --extra-config=kubelet.pod-cidr=10.244.0.0/16 \
  --network-plugin=cni \
  --image-repository='registry.aliyuncs.com/google_containers' \
  --cni=flannel \
  --apiserver-ips=192.168.217.23 \
  --kubernetes-version=1.18.8 \
  --vm-driver=none
The log from starting the cluster:
[root@slave3 conntrack]# minikube start --driver=none --kubernetes-version=1.18.8
* minikube v1.26.1 on Centos 7.4.1708
* Using the none driver based on user configuration
* Starting control plane node minikube in cluster minikube
* Running on localhost (CPUs=4, Memory=7983MB, Disk=51175MB) ...
* OS release is CentOS Linux 7 (Core)
E0911 11:23:25.121495   14039 docker.go:148] "Failed to enable" err=<
        sudo systemctl enable docker.socket: exit status 1
        stdout:
        stderr:
        Failed to execute operation: No such file or directory
 > service="docker.socket"
! This bare metal machine is having trouble accessing https://k8s.gcr.io
* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
    > kubectl.sha256: 65 B / 65 B [-------------------------] 100.00% ? p/s 0s
    > kubelet: 108.05 MiB / 108.05 MiB [--------] 100.00% 639.49 KiB p/s 2m53s
  - Generating certificates and keys ...
  - Booting up control plane ...
! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.8:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap": exit status 1
stdout:
[init] Using Kubernetes version: v1.18.8
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [slave3 localhost] and IPs [192.168.217.136 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [slave3 localhost] and IPs [192.168.217.136 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.

Unfortunately, an error has occurred:
        timed out waiting for the condition

This error is likely caused by:
        - The kubelet is not running
        - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
        - 'systemctl status kubelet'
        - 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
        - 'docker ps -a | grep kube | grep -v pause'
        Once you have found the failing container, you can inspect its logs with:
        - 'docker logs CONTAINERID'

stderr:
W0911 11:26:38.783101   14450 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
        [WARNING Swap]: running with swap on is not supported. Please disable swap
        [WARNING FileExisting-socat]: socat not found in system path
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0911 11:26:48.464749   14450 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0911 11:26:48.466754   14450 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
  - Generating certificates and keys ...
  - Booting up control plane ...
  - Configuring RBAC rules ...
* Configuring local host environment ...
*
! The 'none' driver is designed for experts who need to integrate with an existing VM
* Most users should use the newer 'docker' driver instead, which does not require root!
* For more information, see: https://minikube.sigs.k8s.io/docs/reference/drivers/none/
*
! kubectl and minikube configuration will be stored in /root
! To use kubectl or minikube commands as your own user, you may need to relocate them. For example, to overwrite your own settings, run:
*
  - sudo mv /root/.kube /root/.minikube $HOME
  - sudo chown -R $USER $HOME/.kube $HOME/.minikube
*
* This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
* Verifying Kubernetes components...
  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

Stopping and deleting minikube:
Stopping the cluster is very simple:
minikube stop
The output is:
* Stopping "minikube" in none ...
* Node "minikube" stopped.
If the server is rebooted, just run the command again with start to bring minikube back up. Deleting minikube is equally simple: swap the argument for delete. That removes the configuration files and so on, provided minikube created them itself; files it did not create are left alone.
The output of start:
[root@node3 manifests]# minikube start
* minikube v1.12.0 on Centos 7.4.1708
* Using the none driver based on existing profile
* Starting control plane node minikube in cluster minikube
* Restarting existing none bare metal machine for "minikube" ...
* OS release is CentOS Linux 7 (Core)
* Preparing Kubernetes v1.18.8 on Docker 19.03.9 ...
* Configuring local host environment ...
*
! The 'none' driver is designed for experts who need to integrate with an existing VM
* Most users should use the newer 'docker' driver instead, which does not require root!
* For more information, see: https://minikube.sigs.k8s.io/docs/reference/drivers/none/
*
! kubectl and minikube configuration will be stored in /root
! To use kubectl or minikube commands as your own user, you may need to relocate them. For example, to overwrite your own settings, run:
*
  - sudo mv /root/.kube /root/.minikube $HOME
  - sudo chown -R $USER $HOME/.kube $HOME/.minikube
*
* This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
* Verifying Kubernetes components...
* Enabled addons: default-storageclass, storage-provisioner
* Done! kubectl is now configured to use "minikube"
The output above shows that the single-node Kubernetes cluster installed successfully, but a few warnings need handling:
(1) Caching the kubeadm, kubelet, and kubectl binaries:
> kubectl.sha256: 65 B / 65 B [-------------------------] 100.00% ? p/s 0s
> kubelet: 108.05 MiB / 108.05 MiB [--------] 100.00% 639.49 KiB p/s 2m53s
These binaries are downloaded into /root/.minikube/cache/linux/amd64/v1.18.8/, so to speed things up and deploy fully offline, do the following.
Create the directory:
mkdir -p /root/.minikube/cache/linux/amd64/v1.18.8/
Make the files executable and copy them into this directory:
chmod a+x kube*    # make the binaries executable
[root@node3 v1.18.8]# pwd
/root/.minikube/cache/linux/amd64/v1.18.8
[root@slave3 v1.18.8]# ll
total 192544
-rwxr-xr-x 1 root root  39821312 Sep 11 11:24 kubeadm
-rwxr-xr-x 1 root root  44040192 Sep 11 11:24 kubectl
-rwxr-xr-x 1 root root 113300248 Sep 11 11:26 kubelet
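Since the start log also shows minikube fetching a .sha256 file for kubectl, it is worth checking the staged binaries against their checksums before relying on the cache. A sketch under one assumption: the .sha256 file sits next to the binary and contains just the bare hash (`verify_bin` is an illustrative name).

```shell
# Verify a binary against a bare-hash .sha256 file next to it.
verify_bin() {
  bin=$1
  want=$(cat "$bin.sha256")
  got=$(sha256sum "$bin" | awk '{print $1}')
  if [ "$want" = "$got" ]; then echo "$bin: ok"; else echo "$bin: MISMATCH"; fi
}

# e.g. cd /root/.minikube/cache/linux/amd64/v1.18.8 && verify_bin kubectl
```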
(2) Fixing the cluster health-check errors:
[root@slave3 ~]# kubectl get cs
NAME                 STATUS      MESSAGE                                                                                     ERROR
controller-manager   Unhealthy   Get http://127.0.0.1:10252/healthz: dial tcp 127.0.0.1:10252: connect: connection refused
scheduler            Unhealthy   Get http://127.0.0.1:10251/healthz: dial tcp 127.0.0.1:10251: connect: connection refused
etcd-0               Healthy     {"health":"true"}
The solution:
Delete the --port=0 line from both /etc/kubernetes/manifests/kube-scheduler.yaml and /etc/kubernetes/manifests/kube-controller-manager.yaml. Wait a moment, query again, and everything is back to normal:
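The same edit can be scripted instead of done by hand. A sketch (`fix_cs_health` is an illustrative name; .bak backups are kept, and the kubelet re-creates the static pods on its own once the manifests change):

```shell
# Remove the "--port=0" argument from both static-pod manifests.
fix_cs_health() {
  dir=${1:-/etc/kubernetes/manifests}
  for f in "$dir"/kube-scheduler.yaml "$dir"/kube-controller-manager.yaml; do
    sed -i.bak '/--port=0/d' "$f"
  done
}
```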
[root@slave3 ~]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health":"true"}
Part 5: Installing the dashboard
[root@slave3 ~]# minikube dashboard
* Enabling dashboard ...
  - Using image kubernetesui/metrics-scraper:v1.0.8
  - Using image kubernetesui/dashboard:v2.6.0
* Verifying dashboard health ...
* Launching proxy ...
* Verifying proxy health ...
http://127.0.0.1:32844/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
Set up a proxy:
[root@slave3 v1.18.8]# kubectl proxy --port=45396 --address='0.0.0.0' --disable-filter=true --accept-hosts='^.*'
W0911 12:49:38.664081    8709 proxy.go:167] Request filter disabled, your proxy is vulnerable to XSRF attacks, please be cautious
Starting to serve on [::]:45396
The URL to open in the browser:
This machine's IP is 192.168.217.11; simply combine it (and the proxy port) with the path from the http://127.0.0.1:32844/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ URL above.
With that, the minikube installation is complete.
Appendix:
About addons
You can see that a StorageClass is installed, but many addons are not yet:
[root@slave3 v1.18.8]# minikube addons list
|-----------------------------|----------|--------------|--------------------------------|
|         ADDON NAME          | PROFILE  |    STATUS    |           MAINTAINER           |
|-----------------------------|----------|--------------|--------------------------------|
| ambassador                  | minikube | disabled     | 3rd party (Ambassador)         |
| auto-pause                  | minikube | disabled     | Google                         |
| csi-hostpath-driver         | minikube | disabled     | Kubernetes                     |
| dashboard                   | minikube | enabled      | Kubernetes                     |
| default-storageclass        | minikube | enabled      | Kubernetes                     |
| efk                         | minikube | disabled     | 3rd party (Elastic)            |
| freshpod                    | minikube | disabled     | Google                         |
| gcp-auth                    | minikube | disabled     | Google                         |
| gvisor                      | minikube | disabled     | Google                         |
| headlamp                    | minikube | disabled     | 3rd party (kinvolk.io)         |
| helm-tiller                 | minikube | disabled     | 3rd party (Helm)               |
| inaccel                     | minikube | disabled     | 3rd party (InAccel             |
|                             |          |              | [info@inaccel.com])            |
| ingress                     | minikube | disabled     | Kubernetes                     |
| ingress-dns                 | minikube | disabled     | Google                         |
| istio                       | minikube | disabled     | 3rd party (Istio)              |
| istio-provisioner           | minikube | disabled     | 3rd party (Istio)              |
| kong                        | minikube | disabled     | 3rd party (Kong HQ)            |
| kubevirt                    | minikube | disabled     | 3rd party (KubeVirt)           |
| logviewer                   | minikube | disabled     | 3rd party (unknown)            |
| metallb                     | minikube | disabled     | 3rd party (MetalLB)            |
| metrics-server              | minikube | disabled     | Kubernetes                     |
| nvidia-driver-installer     | minikube | disabled     | Google                         |
| nvidia-gpu-device-plugin    | minikube | disabled     | 3rd party (Nvidia)             |
| olm                         | minikube | disabled     | 3rd party (Operator Framework) |
| pod-security-policy         | minikube | disabled     | 3rd party (unknown)            |
| portainer                   | minikube | disabled     | 3rd party (Portainer.io)       |
| registry                    | minikube | disabled     | Google                         |
| registry-aliases            | minikube | disabled     | 3rd party (unknown)            |
| registry-creds              | minikube | disabled     | 3rd party (UPMC Enterprises)   |
| storage-provisioner         | minikube | enabled      | Google                         |
| storage-provisioner-gluster | minikube | disabled     | 3rd party (Gluster)            |
| volumesnapshots             | minikube | disabled     | Kubernetes                     |
|-----------------------------|----------|--------------|--------------------------------|
Take installing ingress as an example (run with the install's error log printed at the same time):
[root@slave3 v1.18.8]# minikube addons enable ingress --alsologtostderr
I0911 13:09:08.559523   14428 out.go:296] Setting OutFile to fd 1 ...
I0911 13:09:08.572541   14428 out.go:343] TERM=xterm,COLORTERM=, which probably does not support color
I0911 13:09:08.572593   14428 out.go:309] Setting ErrFile to fd 2...
I0911 13:09:08.572609   14428 out.go:343] TERM=xterm,COLORTERM=, which probably does not support color
I0911 13:09:08.572908   14428 root.go:333] Updating PATH: /root/.minikube/bin
I0911 13:09:08.577988   14428 out.go:177] * ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub. You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
* ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub. You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
I0911 13:09:08.580137   14428 config.go:180] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.18.8
I0911 13:09:08.580198   14428 addons.go:65] Setting ingress=true in profile "minikube"
I0911 13:09:08.580243   14428 addons.go:153] Setting addon ingress=true in "minikube"
I0911 13:09:08.580572   14428 host.go:66] Checking if "minikube" exists ...
I0911 13:09:08.581080   14428 exec_runner.go:51] Run: systemctl --version
I0911 13:09:08.584877   14428 kubeconfig.go:92] found "minikube" server: "https://192.168.217.136:8443"
I0911 13:09:08.584942   14428 api_server.go:165] Checking apiserver status ...
I0911 13:09:08.584982   14428 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0911 13:09:08.611630   14428 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15576/cgroup
I0911 13:09:08.626851   14428 api_server.go:181] apiserver freezer: "9:freezer:/kubepods/burstable/pod1a4a24f29bac3cef528a8b328b9798b5/c8a589a612154591de984664d86a3ad96a449f3d0b1145527ceea9c5ed267124"
I0911 13:09:08.626952   14428 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod1a4a24f29bac3cef528a8b328b9798b5/c8a589a612154591de984664d86a3ad96a449f3d0b1145527ceea9c5ed267124/freezer.state
I0911 13:09:08.638188   14428 api_server.go:203] freezer state: "THAWED"
I0911 13:09:08.638329   14428 api_server.go:240] Checking apiserver healthz at https://192.168.217.136:8443/healthz ...
I0911 13:09:08.649018   14428 api_server.go:266] https://192.168.217.136:8443/healthz returned 200: ok
I0911 13:09:08.650082   14428 out.go:177]   - Using image k8s.gcr.io/ingress-nginx/controller:v0.49.3
  - Using image k8s.gcr.io/ingress-nginx/controller:v0.49.3
I0911 13:09:08.652268   14428 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
I0911 13:09:08.653129   14428 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
I0911 13:09:08.654440   14428 addons.go:345] installing /etc/kubernetes/addons/ingress-deploy.yaml
I0911 13:09:08.654528   14428 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/ingress-deploy.yaml (15118 bytes)
I0911 13:09:08.654720   14428 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4099945938 /etc/kubernetes/addons/ingress-deploy.yaml
I0911 13:09:08.668351   14428 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.8/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
I0911 13:09:09.748481   14428 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.8/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.080019138s)
I0911 13:09:09.748552   14428 addons.go:383] Verifying addon ingress=true in "minikube"
I0911 13:09:09.751805   14428 out.go:177] * Verifying ingress addon...
As you can see, the manifest used for the install is this:
sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.8/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
The file is very long, but because it references image registries outside China, the install generally will not succeed.
The fix is to find the images it references and replace them with mirrors that can be pulled domestically.
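One way to do that replacement is a blanket registry rewrite over the manifest. A sketch under stated assumptions: the mirror prefix is hypothetical, so substitute whatever domestic registry actually hosts copies of these images, and note that the manifest pins images by @sha256 digest, so the digest suffixes must also be removed if the mirror's copies were re-pushed with different digests.

```shell
# Rewrite the foreign registries in the addon manifest to a mirror.
mirror_images() {
  # $1: manifest file; $2: mirror prefix (the default here is hypothetical)
  mirror=${2:-registry.cn-hangzhou.aliyuncs.com/google_containers}
  sed -i.bak \
    -e "s#k8s.gcr.io/ingress-nginx#${mirror}#g" \
    -e "s#docker.io/jettech#${mirror}#g" \
    "$1"
}
```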
There is also a permission problem that may produce this error:
F0911 05:24:52.171825       6 ssl.go:389] unexpected error storing fake SSL Cert: could not create PEM certificate file /etc/ingress-controller/ssl/default-fake-certificate.pem: open /etc/ingress-controller/ssl/default-fake-certificate.pem: permission denied
The solution:
Edit the same file again and change the value of runAsUser to 33.
Then re-apply the file:
kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
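The runAsUser edit can also be scripted. A sketch (`set_run_as_user` is an illustrative name): the end-of-line anchor makes the substitution touch only the controller container's `runAsUser: 101`, leaving the admission jobs' `runAsUser: 2000` untouched.

```shell
# Change the ingress controller's runAsUser from 101 to 33 (www-data).
set_run_as_user() {
  sed -i.bak 's/runAsUser: 101$/runAsUser: 33/' "$1"
}

# e.g. set_run_as_user /etc/kubernetes/addons/ingress-deploy.yaml
#      kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
```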
[root@slave3 v1.18.8]# cat /etc/kubernetes/addons/ingress-deploy.yaml
The manifest (ref: https://github.com/kubernetes/ingress-nginx/blob/main/deploy/static/provider/kind/deploy.yaml) defines the ingress-nginx Namespace, ServiceAccounts, Roles, ClusterRoles and their bindings, ConfigMaps, the controller Services, the ingress-nginx-controller Deployment, the ingress-nginx-admission-create and ingress-nginx-admission-patch Jobs, and a ValidatingWebhookConfiguration. It runs to several hundred lines, so only the parts the fixes above touch are reproduced here:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  template:
    spec:
      containers:
      - name: controller
        # the image that has to be swapped for a reachable mirror:
        image: k8s.gcr.io/ingress-nginx/controller:v0.49.3@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324
        securityContext:
          allowPrivilegeEscalation: true
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - ALL
          # the value to change to 33 when the fake-SSL-cert error appears:
          runAsUser: 101

The two admission-webhook Jobs use the image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 (which also needs a mirror) and run with runAsUser: 2000.
After the installation finishes you can see:
[root@slave3 v1.18.8]# kubectl get all -n ingress-nginx
NAME                                            READY   STATUS      RESTARTS   AGE
pod/ingress-nginx-admission-create-n5hc5        0/1     Completed   0          28m
pod/ingress-nginx-admission-patch-cgzl9         0/1     Completed   0          28m
pod/ingress-nginx-controller-54b856d6d7-7fr7q   1/1     Running     0          9m54s

NAME                                         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
service/ingress-nginx-controller             NodePort    10.107.186.74   <none>        80:31411/TCP,443:32683/TCP   28m
service/ingress-nginx-controller-admission   ClusterIP   10.106.184.40   <none>        443/TCP                      28m

NAME                                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/ingress-nginx-controller   1/1     1            1           28m

NAME                                                  DESIRED   CURRENT   READY   AGE
replicaset.apps/ingress-nginx-controller-54b856d6d7   1         1         1       9m54s
replicaset.apps/ingress-nginx-controller-7689b8b4f9   0         0         0       17m
replicaset.apps/ingress-nginx-controller-77cc874b76   0         0         0       28m

NAME                                       COMPLETIONS   DURATION   AGE
job.batch/ingress-nginx-admission-create   1/1           21s        28m
job.batch/ingress-nginx-admission-patch    1/1           22s        28m
The ingress addon is now installed.
總結