【谷粒商城】Cluster Notes — k8s (4/4)
Table of Contents
I. K8s Quick Start
1) Introduction
2) Architecture
3) Concepts
4) Quick Hands-on
II. K8s Cluster Installation
1) kubeadm
2) Prerequisites
3) Deployment Steps
4) Environment Preparation
5) Install docker, kubeadm, kubelet, and kubectl on all nodes
6) Deploy the k8s master
7) Install a Pod network add-on (CNI)
8) Join Node machines to the Kubernetes cluster
9) Basic operations on the Kubernetes cluster
III. Docker in Depth
IV. K8s Details
1. kubectl documentation
2. Resource types
3. Formatted output
Command Reference
The Purpose of Services
Ingress
Installing the Kubernetes Dashboard
KubeSphere
1. Introduction
2. Installation prerequisites
Copyright
- Notes, Basics Part 1 (P1–P28): https://blog.csdn.net/hancoder/article/details/106922139
- Notes, Basics Part 2 (P28–P100): https://blog.csdn.net/hancoder/article/details/107612619
- Notes, Advanced (P340): https://blog.csdn.net/hancoder/article/details/107612746
- Notes, Vue: https://blog.csdn.net/hancoder/article/details/107007605
- Notes, Elasticsearch, product publishing and search: https://blog.csdn.net/hancoder/article/details/113922398
- Notes, authentication service: https://blog.csdn.net/hancoder/article/details/114242184
- Notes, distributed locks and caching: https://blog.csdn.net/hancoder/article/details/114004280
- Notes, cluster chapter: https://blog.csdn.net/hancoder/article/details/107612802
- Spring Cloud notes: https://blog.csdn.net/hancoder/article/details/109063671
- Version note: a notes document was released in 2020 but covered only P1–P50; the P340 content was compiled in 2021. Click the column links under the title to browse the full series.
- Disclaimer:
  - Feel free to read for free, but please do not repost; these notes were typed by hand.
  - This series is continuously revised; the latest version is on CSDN (user hancoder), where all 100k+ characters are free to read.
  - See the end of the article for the offline markdown files. The 2021-03 markdown archive is about 500 KB compressed (images on a cloud image host) and includes these project notes plus personal notes on Spring Cloud, Docker, MyBatis-Plus, RabbitMQ, and more.
- SQL: the sql files at https://github.com/FermHan/gulimall
- Other notes for this project are in the column: https://blog.csdn.net/hancoder/category_10822407.html
I. K8s Quick Start
1) Introduction
Kubernetes, abbreviated k8s, is an open-source system for automating the deployment, scaling, and management of containerized applications.
Chinese official site: https://kubernetes.io/Zh/
Chinese community: https://www.kubernetes.org.cn/
Official docs: https://kubernetes.io/zh/docs/home/
Community docs: https://docs.kubernetes.org.cn/
Evolution of deployment methods:
(Figure: container evolution — traditional deployment → virtualized deployment → container deployment; source: https://d33wubrfki0l68.cloudfront.net/26a177ede4d7b032362289c6fccd448fc4a91174/eb693/images/docs/container_evolution.svg)
Going back in time
Let's review why Kubernetes is so useful.
Traditional deployment era:
Early on, organizations ran applications on physical servers. There was no way to define resource boundaries for applications on a physical server, and this caused resource allocation issues. For example, if multiple applications run on a physical server, one application may take up most of the resources, and the other applications underperform as a result. One solution is to run each application on a different physical server, but this does not scale because resources are underutilized, and maintaining many physical servers is expensive.
Virtualized deployment era:
As a solution, virtualization was introduced. It allows you to run multiple virtual machines (VMs) on a single physical server's CPU. Virtualization isolates applications between VMs and provides a level of security, since the information of one application cannot be freely accessed by another.
Virtualization makes better use of a physical server's resources and allows better scalability, because applications can be added or updated easily; it also reduces hardware costs, and more.
Each VM is a full machine running all the components, including its own operating system, on top of the virtualized hardware.
Container deployment era:
Containers are similar to VMs, but they have relaxed isolation properties so that applications share the operating system (OS). Containers are therefore considered lightweight. Like a VM, a container has its own filesystem, CPU, memory, process space, and so on. Because they are decoupled from the underlying infrastructure, they are portable across clouds and OS distributions.
Containers have become popular because they provide many benefits, for example:
- Agile application creation and deployment: creating a container image is easier and more efficient than creating a VM image.
- Continuous development, integration, and deployment: supports reliable and frequent container image builds and deployments, with quick and easy rollbacks (thanks to image immutability).
- Separation of concerns between Dev and Ops: application container images are created at build/release time rather than at deployment time, decoupling applications from infrastructure.
- Observability: surfaces not only OS-level information and metrics, but also application health and other signals.
- Environmental consistency across development, testing, and production: runs the same on a laptop as it does in the cloud.
- Portability across clouds and OS distributions: runs on Ubuntu, RHEL, CoreOS, on-premises, Google Kubernetes Engine, and anywhere else.
- Application-centric management: raises the level of abstraction from running an OS on virtual hardware to running an application on an OS using logical resources.
- Loosely coupled, distributed, elastic, liberated microservices: applications are broken into smaller, independent pieces that can be deployed and managed dynamically, rather than running monolithically on one big machine.
- Resource isolation: predictable application performance.
- Resource utilization: high efficiency and density.
2) Architecture
(1) Overall master-worker design
(2) Master node architecture
(3) Node architecture
3) Concepts
4) Quick Hands-on
(1) Install minikube
https://github.com/kubernetes/minikube/releases
Download minikube-windows-amd64.exe and rename it to minikube.exe.
Open VirtualBox, open cmd, and run:

```shell
minikube start --vm-driver=virtualbox --registry-mirror=https://registry.docker-cn.com
```

Wait about 20 minutes.
(2) Try deploying and upgrading nginx
Submit an nginx deployment:

```shell
kubectl apply -f https://k8s.io/examples/application/deployment.yaml
```

Upgrade the nginx deployment:

```shell
kubectl apply -f https://k8s.io/examples/application/deployment-update.yaml
```

Scale the nginx deployment.
II. K8s Cluster Installation
1) kubeadm
kubeadm is a tool released by the official community for quickly deploying a Kubernetes cluster.
With it, a Kubernetes cluster can be deployed with two commands.
Create a master node:

```shell
$ kubeadm init
```

Join a node to the current cluster:

```shell
$ kubeadm join <master-node IP and port>
```
2) Prerequisites
- One or more machines running CentOS 7.x-86_x64
- Hardware: 2 GB+ RAM, 2+ CPUs, 30 GB+ disk
- Network connectivity between all machines in the cluster
- Internet access to pull images
- Swap disabled
3) Deployment Steps
4) Environment Preparation
(1) Preparation
- We can use Vagrant to create three virtual machines quickly. Before starting the VMs, configure the VirtualBox host-only network; we standardize on 192.168.56.1, so every VM gets a 56.x IP address from now on.
- In Global Settings, pick a disk with plenty of free space to store images.
Network adapter 1 is NAT, used by the VMs and the host to reach the Internet. Adapter 2 is a host-only network, a virtual network shared among the VMs.
(2) Start the three virtual machines
- Copy the Vagrantfile we provide into a directory whose path contains no Chinese characters or spaces, then run `vagrant up` to start the three VMs. (Vagrant can in fact bootstrap an entire K8s cluster in one shot:)
https://github.com/rootsongjc/kubernetes-vagrant-centos-cluster
http://github.com/davidkbainbridge/k8s-playground
Below is the Vagrantfile used to create the three VMs: k8s-node1, k8s-node2, and k8s-node3.
```ruby
Vagrant.configure("2") do |config|
  (1..3).each do |i|
    config.vm.define "k8s-node#{i}" do |node|
      # Box for the VM
      node.vm.box = "centos/7"
      # VM hostname
      node.vm.hostname = "k8s-node#{i}"
      # VM IP address
      node.vm.network "private_network", ip: "192.168.56.#{99+i}", netmask: "255.255.255.0"
      # Shared folder between host and VM
      # node.vm.synced_folder "~/Documents/vagrant/share", "/home/vagrant/share"
      # VirtualBox-specific configuration
      node.vm.provider "virtualbox" do |v|
        # VM name
        v.name = "k8s-node#{i}"
        # Memory size (MB)
        v.memory = 4096
        # Number of CPUs
        v.cpus = 4
      end
    end
  end
end
```
- Log in to each of the three VMs and enable root password access.
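The command block for this step was lost in extraction; a typical way to enable root password SSH on these CentOS 7 Vagrant boxes (an assumed recovery, not the author's exact commands) is:

```shell
# Assumed recovery of the elided commands.
# On each VM (vagrant ssh k8s-nodeN, then sudo -i), allow password SSH for root:
sed -i 's/^PasswordAuthentication no/PasswordAuthentication yes/' /etc/ssh/sshd_config
sed -i 's/^#\?PermitRootLogin .*/PermitRootLogin yes/' /etc/ssh/sshd_config
service sshd restart
```

After restarting sshd you can SSH in directly as root with the box's root password.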
About the three nodes' eth0 interfaces having identical IP addresses under the "NAT" attachment mode.
**Problem:** inspect the routing table on k8s-node1:

```shell
# Check the default NIC
[root@k8s-node1 ~]# ip route show
default via 10.0.2.2 dev eth0 proto dhcp metric 100
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15 metric 100
192.168.56.0/24 dev eth1 proto kernel scope link src 192.168.56.100 metric 101
```

The routing table shows that packets are sent and received through eth0.
Checking the IP address bound to eth0 on k8s-node1, k8s-node2, and k8s-node3, they are all identical: 10.0.2.15. These addresses are meant for Kubernetes cluster traffic, as opposed to the eth1 addresses, which are used for remote management.
```shell
[root@k8s-node1 ~]# ip addr
...
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:8a:fe:e6 brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global noprefixroute dynamic eth0
       valid_lft 84418sec preferred_lft 84418sec
    inet6 fe80::5054:ff:fe8a:fee6/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:a3:ca:c0 brd ff:ff:ff:ff:ff:ff
    inet 192.168.56.100/24 brd 192.168.56.255 scope global noprefixroute eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fea3:cac0/64 scope link
       valid_lft forever preferred_lft forever
[root@k8s-node1 ~]#
```
**Cause:** the VMs use port-forwarding rules: they share one address and are distinguished only by port. These port-forwarding rules cause many unnecessary problems later, so switch to a NAT Network instead.
Fix:
- Select the three nodes, then go to "Machine" → "Global Settings" → "Network" and add a NAT Network.
- Change each VM's adapter type to the NAT Network, and refresh to regenerate its MAC address.
- Adapter 1's network carries cluster traffic; adapter 2's network is for host access.
- Check the three nodes' IPs again.
(3) Configure Linux (run on all three nodes)
- Turn off the firewall
- Turn off SELinux
- Turn off swap
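The command blocks for these three steps were lost in extraction. The sequence conventionally used with kubeadm on CentOS 7, consistent with the rest of this guide (an assumed recovery, not the author's exact commands):

```shell
# Turn off the firewall
systemctl stop firewalld
systemctl disable firewalld

# Turn off SELinux: permanently in the config, then for the current boot
sed -i 's/enforcing/disabled/' /etc/selinux/config
setenforce 0

# Turn off swap: for the current boot, then permanently in /etc/fstab
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab
free -g   # verify: the swap line should show 0
```

kubeadm's preflight checks fail when swap is enabled, which is why this step is mandatory.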
- Add hostname-to-IP mappings:
Check the hostname:

```shell
hostname
```

If the hostname is wrong, change it with `hostnamectl set-hostname <newhostname>`.

```shell
vi /etc/hosts
10.0.2.4 k8s-node1
10.0.2.5 k8s-node2
10.0.2.6 k8s-node3
```

Pass bridged IPv4 traffic to the iptables chains:

```shell
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
```

Apply the rules:

```shell
sysctl --system
```

Troubleshooting: if you get a "read-only file system" error, run:

```shell
mount -o remount rw /
```
- `date` to check the time (optional)
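The commands accompanying this optional step were lost; a common way to synchronize the clocks in CentOS 7 kubeadm guides (assumed, not the author's exact commands) is:

```shell
# Assumed recovery: install ntpdate and sync once against a public time server
yum install -y ntpdate
ntpdate time.windows.com
date   # confirm the time is now correct
```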
5) Install docker, kubeadm, kubelet, and kubectl on all nodes
Kubernetes' default CRI (container runtime) is Docker, so install Docker first.
(1) Install Docker
1. Remove any previous Docker installation:

```shell
sudo yum remove docker \
    docker-client \
    docker-client-latest \
    docker-common \
    docker-latest \
    docker-latest-logrotate \
    docker-logrotate \
    docker-engine
```
2. Install Docker CE:

```shell
sudo yum install -y yum-utils \
    device-mapper-persistent-data \
    lvm2

# Point yum at the docker-ce repo
sudo yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo

# Install docker and the docker CLI
sudo yum -y install docker-ce docker-ce-cli containerd.io
```
3. Configure a Docker registry mirror:

```shell
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://ke9h1pt4.mirror.aliyuncs.com"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
```
4. Start Docker and enable it at boot:

```shell
systemctl enable docker
```

With the base environment ready, this is a good time to snapshot the three VMs.
(2) Add the Aliyun yum source

```shell
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
```

More details: https://developer.aliyun.com/mirror/kubernetes
(3) Install kubeadm, kubelet, and kubectl

```shell
yum list | grep kube
```

Install:

```shell
yum install -y kubelet-1.17.3 kubeadm-1.17.3 kubectl-1.17.3
```

Enable and start at boot:

```shell
systemctl enable kubelet && systemctl start kubelet
```

Check kubelet's status:

```shell
systemctl status kubelet
```

Check the kubelet version:

```shell
[root@k8s-node2 ~]# kubelet --version
Kubernetes v1.17.3
```
6) Deploy the k8s master
(1) Initialize the master node
On the master node, create and run master_images.sh:

```shell
#!/bin/bash

images=(
    kube-apiserver:v1.17.3
    kube-proxy:v1.17.3
    kube-controller-manager:v1.17.3
    kube-scheduler:v1.17.3
    coredns:1.6.5
    etcd:3.4.3-0
    pause:3.1
)

for imageName in ${images[@]} ; do
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
    # docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName k8s.gcr.io/$imageName
done
```
Check the internal (10.0.2.x) communication address of the .100 node:

```shell
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:7e:dd:f5 brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.4/24 brd 10.0.2.255 scope global noprefixroute dynamic eth0
       valid_lft 512sec preferred_lft 512sec
    inet6 fe80::a00:27ff:fe7e:ddf5/64 scope link
       valid_lft forever preferred_lft forever
```
Initialize kubeadm:

```shell
kubeadm init \
    --apiserver-advertise-address=10.0.2.4 \
    --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers \
    --kubernetes-version v1.17.3 \
    --service-cidr=10.96.0.0/16 \
    --pod-network-cidr=10.244.0.0/16
```

Notes:
- `--apiserver-advertise-address=10.0.2.4`: the master host's address, i.e. the eth0 address shown above.
Output:

```shell
[root@k8s-node1 opt]# kubeadm init \
> --apiserver-advertise-address=10.0.2.15 \
> --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers \
> --kubernetes-version v1.17.3 \
> --service-cidr=10.96.0.0/16 \
> --pod-network-cidr=10.244.0.0/16
W0503 14:07:12.594252   10124 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.17.3
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-node1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.2.15]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-node1 localhost] and IPs [10.0.2.15 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-node1 localhost] and IPs [10.0.2.15 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0503 14:07:30.908642   10124 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0503 14:07:30.911330   10124 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 22.506521 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-node1 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-node1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: sg47f3.4asffoi6ijb8ljhq
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

# Kubernetes has initialized successfully
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.0.2.15:6443 --token sg47f3.4asffoi6ijb8ljhq \
    --discovery-token-ca-cert-hash sha256:81fccdd29970cbc1b7dc7f171ac0234d53825bdf9b05428fc9e6767436991bfb
[root@k8s-node1 opt]#
```
Since the default image registry k8s.gcr.io is unreachable from mainland China, we point at the Aliyun registry here. You can also pre-pull the images with the images.sh script above.
The address `registry.aliyuncs.com/google_containers` works too.
Aside: Classless Inter-Domain Routing (CIDR) is a method for allocating IP addresses to users and for efficiently routing IP packets on the Internet by classifying IP addresses.
Image pulls may fail; download the images first if needed.
When init completes, copy the cluster join token right away.
(2) Test kubectl (run on the master)

```shell
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```

Detailed deployment docs: https://kubernetes.io/docs/concepts/cluster-administration/addons/

```shell
kubectl get nodes   # list all nodes
```

The master's status is NotReady for now; wait until the network add-on is in place.

```shell
journalctl -u kubelet   # view kubelet logs
```
7) Install a Pod network add-on (CNI)
On the master node, install the Pod network add-on:

```shell
kubectl apply -f \
    https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
```

The address above may be blocked; you can instead apply a locally downloaded kube-flannel.yml (downloadable via https://blog.csdn.net/lxm1720161656/article/details/106436252), for example:

```shell
[root@k8s-node1 k8s]# kubectl apply -f kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created
[root@k8s-node1 k8s]#
```

If the images referenced in flannel.yml can't be reached either, find an alternative mirror on Docker Hub, then edit the yml and replace all the amd64 image addresses.
Wait about three minutes.

```shell
kubectl get pods -n kube-system                # pods in a given namespace
kubectl get pods --all-namespaces              # pods in all namespaces
ip link set cni0 down                          # if the network misbehaves, take cni0 down and reboot the VM
watch kubectl get pod -n kube-system -o wide   # monitor pod progress
```

Wait 3–10 minutes and continue once everything is Running.
Check the namespaces:

```shell
[root@k8s-node1 k8s]# kubectl get ns
NAME              STATUS   AGE
default           Active   30m
kube-node-lease   Active   30m
kube-public       Active   30m
kube-system       Active   30m
[root@k8s-node1 k8s]#
```

Check the node info on the master:

```shell
[root@k8s-node1 k8s]# kubectl get nodes
NAME        STATUS   ROLES    AGE   VERSION
k8s-node1   Ready    master   34m   v1.17.3
[root@k8s-node1 k8s]#
```

The status must be Ready before running the commands below.
Finally, run the join command on both "k8s-node2" and "k8s-node3":

```shell
kubeadm join 10.0.2.4:6443 --token bt3hkp.yxnpzsgji4a6edy7 \
    --discovery-token-ca-cert-hash sha256:64949994a89c53e627d68b115125ff753bfe6ff72a26eb561bdc30f32837415a
```

Monitor pod progress:

```shell
# on the master
watch kubectl get pod -n kube-system -o wide
```

Once every status is Running, check the node info again:

```shell
[root@k8s-node1 ~]# kubectl get nodes
NAME        STATUS   ROLES    AGE     VERSION
k8s-node1   Ready    master   3h50m   v1.17.3
k8s-node2   Ready    <none>   3h3m    v1.17.3
k8s-node3   Ready    <none>   3h3m    v1.17.3
[root@k8s-node1 ~]#
```
8) Join Node machines to the Kubernetes cluster
On each node machine, add it to the cluster by running the `kubeadm join` command printed by `kubeadm init`.
Verify that the nodes joined successfully.
If the token has expired, generate a new join command:

```shell
kubeadm token create --print-join-command
```
9) Basic cluster operations
1. Deploy a Tomcat on the master:

```shell
kubectl create deployment tomcat6 --image=tomcat:6.0.53-jre8
```

List all resources:

```shell
[root@k8s-node1 k8s]# kubectl get all
NAME                           READY   STATUS              RESTARTS   AGE
pod/tomcat6-7b84fb5fdc-cfd8g   0/1     ContainerCreating   0          41s

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   70m

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/tomcat6   0/1     1            0           41s

NAME                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/tomcat6-7b84fb5fdc   1         1         0       41s
[root@k8s-node1 k8s]#
```
`kubectl get pods -o wide` shows the Tomcat deployment details; you can see it was scheduled onto k8s-node3:

```shell
[root@k8s-node1 k8s]# kubectl get all -o wide
NAME                           READY   STATUS              RESTARTS   AGE   IP       NODE        NOMINATED NODE   READINESS GATES
pod/tomcat6-5f7ccf4cb9-xhrr9   0/1     ContainerCreating   0          77s   <none>   k8s-node3   <none>           <none>

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE   SELECTOR
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   68m   <none>

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES               SELECTOR
deployment.apps/tomcat6   0/1     1            0           77s   tomcat       tomcat:6.0.53-jre8   app=tomcat6

NAME                                 DESIRED   CURRENT   READY   AGE   CONTAINERS   IMAGES               SELECTOR
replicaset.apps/tomcat6-5f7ccf4cb9   1         1         0       77s   tomcat       tomcat:6.0.53-jre8   app=tomcat6,pod-template-hash=5f7ccf4cb9
```
Check which images node3 downloaded:

```shell
[root@k8s-node3 k8s]# docker images
REPOSITORY                                                       TAG             IMAGE ID       CREATED         SIZE
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy   v1.17.3         ae853e93800d   14 months ago   116MB
quay.io/coreos/flannel                                           v0.11.0-amd64   ff281650a721   2 years ago     52.6MB
registry.cn-hangzhou.aliyuncs.com/google_containers/pause        3.1             da86e6ba6ca1   3 years ago     742kB
tomcat                                                           6.0.53-jre8     49ab0583115a   3 years ago     290MB
```
Check the containers running on node3:

```shell
[root@k8s-node3 k8s]# docker ps
CONTAINER ID   IMAGE                                                            COMMAND                  CREATED              STATUS              NAMES
8a197fa41dd9   tomcat                                                           "catalina.sh run"        About a minute ago   Up About a minute   k8s_tomcat_tomcat6-5f7ccf4cb9-xhrr9_default_81f186a8-4805-4bbb-8d77-3142269942ed_0
4074d0d63a88   registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1    "/pause"                 2 minutes ago        Up 2 minutes        k8s_POD_tomcat6-5f7ccf4cb9-xhrr9_default_81f186a8-4805-4bbb-8d77-3142269942ed_0
db3faf3a280d   ff281650a721                                                     "/opt/bin/flanneld -…"   29 minutes ago       Up 29 minutes       k8s_kube-flannel_kube-flannel-ds-amd64-vcktd_kube-system_31ca3556-d6c3-48b2-b393-35ff7d89a078_0
be461b54cb4b   registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy   "/usr/local/bin/kube…"   30 minutes ago       Up 30 minutes       k8s_kube-proxy_kube-proxy-ptq2t_kube-system_0e1f7df3-7204-481d-bf15-4b0e09cf0c81_0
88d1ab87f400   registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1    "/pause"                 31 minutes ago       Up 31 minutes       k8s_POD_kube-flannel-ds-amd64-vcktd_kube-system_31ca3556-d6c3-48b2-b393-35ff7d89a078_0
52be28610a02   registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1    "/pause"                 31 minutes ago       Up 31 minutes       k8s_POD_kube-proxy-ptq2t_kube-system_0e1f7df3-7204-481d-bf15-4b0e09cf0c81_0
```
Run on node1:

```shell
[root@k8s-node1 k8s]# kubectl get pods
NAME                       READY   STATUS    RESTARTS   AGE
tomcat6-7b84fb5fdc-cfd8g   1/1     Running   0          5m35s

[root@k8s-node1 k8s]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                READY   STATUS    RESTARTS   AGE
default       tomcat6-7b84fb5fdc-cfd8g            1/1     Running   0          163m
kube-system   coredns-546565776c-9sbmk            1/1     Running   0          3h52m
kube-system   coredns-546565776c-t68mr            1/1     Running   0          3h52m
kube-system   etcd-k8s-node1                      1/1     Running   0          3h52m
kube-system   kube-apiserver-k8s-node1            1/1     Running   0          3h52m
kube-system   kube-controller-manager-k8s-node1   1/1     Running   0          3h52m
kube-system   kube-flannel-ds-amd64-5xs5j         1/1     Running   0          3h6m
kube-system   kube-flannel-ds-amd64-6xwth         1/1     Running   0          3h24m
kube-system   kube-flannel-ds-amd64-fvnvx         1/1     Running   0          3h6m
kube-system   kube-proxy-7tkvl                    1/1     Running   0          3h6m
kube-system   kube-proxy-mvlnk                    1/1     Running   0          3h6m
kube-system   kube-proxy-sz2vz                    1/1     Running   0          3h52m
kube-system   kube-scheduler-k8s-node1            1/1     Running   0          3h52m
[root@k8s-node1 ~]#
```
We saw earlier that Tomcat is deployed on node3. Now simulate a node failure: power off node3 and observe what happens.
(If you just `docker stop` the container, `docker ps` soon shows a fresh one: k8s recreated it. So power off node3 instead.)

```shell
[root@k8s-node1 k8s]# kubectl get nodes
NAME        STATUS     ROLES    AGE   VERSION
k8s-node1   Ready      master   79m   v1.17.3
k8s-node2   Ready      <none>   41m   v1.17.3
k8s-node3   NotReady   <none>   41m   v1.17.3
```

It takes a few minutes for failover to kick in:

```shell
[root@k8s-node1 k8s]# kubectl get pods -o wide
NAME                       READY   STATUS        RESTARTS   AGE     IP           NODE        NOMINATED NODE   READINESS GATES
tomcat6-5f7ccf4cb9-clcpr   1/1     Running       0          4m16s   10.244.1.2   k8s-node2   <none>           <none>
tomcat6-5f7ccf4cb9-xhrr9   1/1     Terminating   1          22m     10.244.2.2   k8s-node3   <none>           <none>
```
2. Expose tomcat6 access
On the master:

```shell
kubectl expose deployment tomcat6 --port=80 --target-port=8080 --type=NodePort
```

The Service's port 80 maps to the container's port 8080; the Service exposes the pods on port 80.
Check the service:

```shell
[root@k8s-node1 k8s]# kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP        93m
tomcat6      NodePort    10.96.7.78   <none>        80:30055/TCP   8s
```
Open http://192.168.56.100:30055/ in a browser; the Tomcat home page appears.
The command below shows the pod and the Service wrapping it; the pod was produced by the Deployment, which also maintains a replica:

```shell
[root@k8s-node1 ~]# kubectl get all
NAME                           READY   STATUS    RESTARTS   AGE
pod/tomcat6-5f7ccf4cb9-clcpr   1/1     Running   0          4h12m

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP        5h37m
service/tomcat6      NodePort    10.96.7.78   <none>        80:30055/TCP   4h3m

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/tomcat6   1/1     1            1           4h30m

NAME                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/tomcat6-5f7ccf4cb9   1         1         1       4h30m
```
3. Dynamic scaling test

```shell
[root@k8s-node1 ~]# kubectl get deployment
NAME      READY   UP-TO-DATE   AVAILABLE   AGE
tomcat6   2/2     2            2           11h
[root@k8s-node1 ~]#
```

Rolling upgrade: `kubectl set image` (see `--help`)
Scale out: `kubectl scale --replicas=3 deployment tomcat6`
With multiple replicas, the tomcat6 NodePort on any node serves the app:
http://192.168.56.101:30055/
http://192.168.56.102:30055/
Scale in: `kubectl scale --replicas=2 deployment tomcat6`
```shell
[root@k8s-node1 ~]# kubectl scale --replicas=1 deployment tomcat6
deployment.apps/tomcat6 scaled

[root@k8s-node1 ~]# kubectl get pods -o wide
NAME                       READY   STATUS    RESTARTS   AGE     IP           NODE        NOMINATED NODE   READINESS GATES
tomcat6-5f7ccf4cb9-clcpr   1/1     Running   0          4h32m   10.244.1.2   k8s-node2   <none>           <none>

[root@k8s-node1 ~]# kubectl get all
NAME                           READY   STATUS    RESTARTS   AGE
pod/tomcat6-5f7ccf4cb9-clcpr   1/1     Running   0          4h33m

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP        5h58m
service/tomcat6      NodePort    10.96.7.78   <none>        80:30055/TCP   4h24m

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/tomcat6   1/1     1            1           4h51m

NAME                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/tomcat6-5f7ccf4cb9   1         1         1       4h51m
```
4. Getting the YAML for the operations above
See the "K8s Details" section.
5. Deletion

```shell
kubectl get all
kubectl delete deploy/nginx
kubectl delete service/nginx-service
```
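Applied to the tomcat6 example from this chapter, a typical cleanup looks like this (the resource names are taken from the earlier `kubectl get all` output; treat this as an illustrative sketch, not the author's exact commands):

```shell
# Delete the Deployment; its ReplicaSet and pods are garbage-collected with it
kubectl delete deployment.apps/tomcat6
# Delete the NodePort Service that exposed it
kubectl delete service/tomcat6
```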
III. Docker in Depth
IV. K8s Details
1. kubectl documentation
https://kubernetes.io/zh/docs/reference/kubectl/overview/
2. Resource types
https://kubernetes.io/zh/docs/reference/kubectl/overview/#資源類型
3. Formatted output
https://kubernetes.io/zh/docs/reference/kubectl/overview/
The default output format for all kubectl commands is human-readable plain text. To print details to the terminal in a specific format, add the `-o` or `--output` flag to a supported kubectl command.
Syntax:

```shell
kubectl [command] [TYPE] [NAME] -o=<output_format>
```

Depending on the kubectl operation, the following output formats are supported:

| Flag | Description |
| --- | --- |
| `-o custom-columns=<spec>` | Print a table using a comma-separated list of custom columns. |
| `-o custom-columns-file=<filename>` | Print a table using the custom-column template in the `<filename>` file. |
| `-o json` | Output the API object in JSON format. |
| `-o jsonpath=<template>` | Print the fields defined by a jsonpath expression. |
| `-o jsonpath-file=<filename>` | Print the fields defined by the jsonpath expression in the `<filename>` file. |
| `-o name` | Print only the resource name and nothing else. |
| `-o wide` | Output in plain text with additional information; for pods, the node name is included. |
| `-o yaml` | Output the API object in YAML format. |

Example
In this example, the following command outputs the details of a single pod as a YAML-formatted object:

```shell
kubectl get pod web-pod-13je7 -o yaml
```

Remember: see the kubectl reference documentation for which output formats each command supports.
--dry-run:

> --dry-run='none': Must be "none", "server", or "client". If client strategy, only print the object that would be sent, without sending it. If server strategy, submit server-side request without persisting the resource.

In other words, with the --dry-run option the command is not actually executed: the client strategy only prints the object that would be sent, and the server strategy submits a server-side request without persisting the resource.
```shell
[root@k8s-node1 ~]# kubectl create deployment tomcat6 --image=tomcat:6.0.53-jre8 --dry-run -o yaml
W0504 03:39:08.389369    8107 helpers.go:535] --dry-run is deprecated and can be replaced with --dry-run=client.
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: tomcat6
  name: tomcat6
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tomcat6
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: tomcat6
    spec:
      containers:
      - image: tomcat:6.0.53-jre8
        name: tomcat
        resources: {}
status: {}
[root@k8s-node1 ~]#
```
In fact we can also redirect this YAML to a file and then apply it with `kubectl apply -f`:

```shell
# Write to tomcat6.yaml
[root@k8s-node1 ~]# kubectl create deployment tomcat6 --image=tomcat:6.0.53-jre8 --dry-run -o yaml > tomcat6.yaml
W0504 03:46:18.180366   11151 helpers.go:535] --dry-run is deprecated and can be replaced with --dry-run=client.

# Edit the replica count to 3
[root@k8s-node1 ~]# cat tomcat6.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: tomcat6
  name: tomcat6
spec:
  replicas: 3   # replica count changed to 3
  selector:
    matchLabels:
      app: tomcat6
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: tomcat6
    spec:
      containers:
      - image: tomcat:6.0.53-jre8
        name: tomcat
        resources: {}
status: {}

# Apply tomcat6.yaml
[root@k8s-node1 ~]# kubectl apply -f tomcat6.yaml
deployment.apps/tomcat6 created
[root@k8s-node1 ~]#
```
Check the pods:

```shell
[root@k8s-node1 ~]# kubectl get pods
NAME                       READY   STATUS    RESTARTS   AGE
tomcat6-7b84fb5fdc-5jh6t   1/1     Running   0          8s
tomcat6-7b84fb5fdc-8lhwv   1/1     Running   0          8s
tomcat6-7b84fb5fdc-j4qmh   1/1     Running   0          8s
[root@k8s-node1 ~]#
```
Inspect one pod in detail (the long YAML output was not preserved here):

```shell
[root@k8s-node1 ~]# kubectl get pods tomcat6-7b84fb5fdc-5jh6t -o yaml
```
Command Reference
https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#create

The Purpose of Services
1. Deploy an nginx:

```shell
kubectl create deployment nginx --image=nginx
```

2. Expose nginx:

```shell
kubectl expose deployment nginx --port=80 --type=NodePort
```

A Service provides a unified access entry point for an application: it manages a group of Pods, prevents Pods from becoming unreachable (service discovery), and defines an access policy for that group of Pods.
We currently expose via NodePort, so the Pods can be reached on each node's port; if a node goes down, that access point breaks.
Earlier we deployed and exposed Tomcat from the command line; the same can be done via YAML.

```shell
# This just fetches a Deployment YAML template
[root@k8s-node1 ~]# kubectl create deployment tomcat6 --image=tomcat:6.0.53-jre8 --dry-run -o yaml > tomcat6-deployment.yaml
W0504 04:13:28.265432   24263 helpers.go:535] --dry-run is deprecated and can be replaced with --dry-run=client.
[root@k8s-node1 ~]# ls tomcat6-deployment.yaml
tomcat6-deployment.yaml
[root@k8s-node1 ~]#
```
Edit "tomcat6-deployment.yaml" to the following:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: tomcat6
  name: tomcat6
spec:
  replicas: 3
  selector:
    matchLabels:
      app: tomcat6
  template:
    metadata:
      labels:
        app: tomcat6
    spec:
      containers:
      - image: tomcat:6.0.53-jre8
        name: tomcat
```
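The command block that produced the Service YAML was lost in extraction. Judging by the expose command used earlier and the Service spec spliced into the combined file, it was presumably (an assumption, not the author's verbatim command):

```shell
# Presumed: dry-run the expose command to emit a Service manifest as YAML
kubectl expose deployment tomcat6 --port=80 --target-port=8080 \
    --type=NodePort --dry-run -o yaml
```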
Splicing that output onto "tomcat6-deployment.yaml" gives one file that both deploys the app and exposes the service:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: tomcat6
  name: tomcat6
spec:
  replicas: 3
  selector:
    matchLabels:
      app: tomcat6
  template:
    metadata:
      labels:
        app: tomcat6
    spec:
      containers:
      - image: tomcat:6.0.53-jre8
        name: tomcat
---
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: tomcat6
  name: tomcat6
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: tomcat6
  type: NodePort
```
Deploy and expose the service:

```shell
[root@k8s-node1 ~]# kubectl apply -f tomcat6-deployment.yaml
deployment.apps/tomcat6 created
service/tomcat6 created
```
Check the service and deployment info:

```shell
[root@k8s-node1 ~]# kubectl get all
NAME                           READY   STATUS    RESTARTS   AGE
pod/tomcat6-7b84fb5fdc-dsqmb   1/1     Running   0          4s
pod/tomcat6-7b84fb5fdc-gbmxc   1/1     Running   0          5s
pod/tomcat6-7b84fb5fdc-kjlc6   1/1     Running   0          4s

NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        14h
service/tomcat6      NodePort    10.96.147.210   <none>        80:30172/TCP   4s

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/tomcat6   3/3     3            3           5s

NAME                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/tomcat6-7b84fb5fdc   3         3         3       5s
[root@k8s-node1 ~]#
```
Access port 30172 on node1, node2 and node3 (the NodePort is open on every node):
```shell
[root@k8s-node1 ~]# curl -I http://192.168.56.{100,101,102}:30172/
HTTP/1.1 200 OK
Server: Apache-Coyote/1.1
Accept-Ranges: bytes
ETag: W/"7454-1491118183000"
Last-Modified: Sun, 02 Apr 2017 07:29:43 GMT
Content-Type: text/html
Content-Length: 7454
Date: Mon, 04 May 2020 04:35:35 GMT

HTTP/1.1 200 OK
Server: Apache-Coyote/1.1
Accept-Ranges: bytes
ETag: W/"7454-1491118183000"
Last-Modified: Sun, 02 Apr 2017 07:29:43 GMT
Content-Type: text/html
Content-Length: 7454
Date: Mon, 04 May 2020 04:35:35 GMT

HTTP/1.1 200 OK
Server: Apache-Coyote/1.1
Accept-Ranges: bytes
ETag: W/"7454-1491118183000"
Last-Modified: Sun, 02 Apr 2017 07:29:43 GMT
Content-Type: text/html
Content-Length: 7454
Date: Mon, 04 May 2020 04:35:35 GMT

[root@k8s-node1 ~]#
```
Ingress

Ingress discovers and routes to Pods through Services, giving domain-name-based access. An Ingress controller implements the load balancing across Pods, supporting both layer-4 (TCP/UDP) and layer-7 (HTTP) balancing.

- a Service fronts multiple Pods
- an Ingress fronts multiple Services
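The layering above (Ingress → Services → Pods) can be sketched as a routing table. This is a toy Python model for illustration, not the actual nginx Ingress controller logic:

```python
from itertools import cycle

# Each Service round-robins over its backing pods;
# the Ingress maps hostnames onto Services.
services = {"tomcat6": cycle(["pod-a", "pod-b", "pod-c"])}
ingress_rules = {"tomcat6.kubenetes.com": "tomcat6"}

def handle(host):
    """Dispatch a request: pick the Service by Host header, then a pod."""
    svc = ingress_rules[host]        # layer-7: route by domain name
    return svc, next(services[svc])  # the Service load-balances across pods

assert handle("tomcat6.kubenetes.com")[0] == "tomcat6"
# Three consecutive requests hit all three replicas.
pods = [handle("tomcat6.kubenetes.com")[1] for _ in range(3)]
assert sorted(pods) == ["pod-a", "pod-b", "pod-c"]
```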
Steps:

(1) Deploy the Ingress controller

Apply "k8s/ingress-controller.yaml":
```shell
[root@k8s-node1 k8s]# kubectl apply -f ingress-controller.yaml
namespace/ingress-nginx created
configmap/nginx-configuration created
configmap/tcp-services created
configmap/udp-services created
serviceaccount/nginx-ingress-serviceaccount created
clusterrole.rbac.authorization.k8s.io/nginx-ingress-clusterrole created
role.rbac.authorization.k8s.io/nginx-ingress-role created
rolebinding.rbac.authorization.k8s.io/nginx-ingress-role-nisa-binding created
clusterrolebinding.rbac.authorization.k8s.io/nginx-ingress-clusterrole-nisa-binding created
daemonset.apps/nginx-ingress-controller created
service/ingress-nginx created
[root@k8s-node1 k8s]#
```
Check:
```shell
[root@k8s-node1 k8s]# kubectl get pods --all-namespaces
NAMESPACE       NAME                                READY   STATUS              RESTARTS   AGE
default         tomcat6-7b84fb5fdc-dsqmb            1/1     Running             0          16m
default         tomcat6-7b84fb5fdc-gbmxc            1/1     Running             0          16m
default         tomcat6-7b84fb5fdc-kjlc6            1/1     Running             0          16m
ingress-nginx   nginx-ingress-controller-9q6cs      0/1     ContainerCreating   0          40s
ingress-nginx   nginx-ingress-controller-qx572      0/1     ContainerCreating   0          40s
kube-system     coredns-546565776c-9sbmk            1/1     Running             1          14h
kube-system     coredns-546565776c-t68mr            1/1     Running             1          14h
kube-system     etcd-k8s-node1                      1/1     Running             1          14h
kube-system     kube-apiserver-k8s-node1            1/1     Running             1          14h
kube-system     kube-controller-manager-k8s-node1   1/1     Running             1          14h
kube-system     kube-flannel-ds-amd64-5xs5j         1/1     Running             2          13h
kube-system     kube-flannel-ds-amd64-6xwth         1/1     Running             2          14h
kube-system     kube-flannel-ds-amd64-fvnvx         1/1     Running             1          13h
kube-system     kube-proxy-7tkvl                    1/1     Running             1          13h
kube-system     kube-proxy-mvlnk                    1/1     Running             2          13h
kube-system     kube-proxy-sz2vz                    1/1     Running             1          14h
kube-system     kube-scheduler-k8s-node1            1/1     Running             1          14h
[root@k8s-node1 k8s]#
```
The master node only does the scheduling; the actual work runs on node2 and node3, and you can see them still pulling the controller image (ContainerCreating).
(2) Create an Ingress rule
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web
spec:
  rules:
  - host: tomcat6.kubenetes.com
    http:
      paths:
      - backend:
          serviceName: tomcat6
          servicePort: 80
```
Add the following host mapping to your local machine's hosts file:

```
192.168.56.102 tomcat6.kubenetes.com
```
Test: http://tomcat6.kubenetes.com/ . Even if one node in the cluster goes down, the setup as a whole keeps serving.
Install the Kubernetes web UI: Dashboard

1. Deploy the Dashboard

```shell
$ kubectl apply -f kubernetes-dashboard.yaml
```

The file is provided in the "k8s" source directory.
2. Expose the Dashboard publicly

By default the Dashboard is only reachable from inside the cluster. Change its Service to type NodePort to expose it externally:
```yaml
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
  - port: 443
    targetPort: 8443
    nodePort: 30001
  selector:
    k8s-app: kubernetes-dashboard
```
Access URL: https://NodeIP:30001 (the Dashboard serves HTTPS, port 443 inside the cluster).
3. Create an authorized account

```shell
$ kubectl create serviceaccount dashboard-admin -n kube-system
```

Bind the account to the cluster-admin role and read the token from its secret, then use that token to log in to the Dashboard.
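One detail when fetching that token by hand: Kubernetes stores Secret data base64-encoded, so a token read via `kubectl get secret ... -o jsonpath='{.data.token}'` must be decoded before pasting it into the login form (`kubectl describe secret` decodes it for you). A quick stdlib sketch with an illustrative, made-up token value:

```python
import base64

# What the API returns in .data.token (illustrative value, not a real token):
encoded = base64.b64encode(b"eyJhbGciOiJSUzI1NiJ9.demo-token").decode()

# What you actually paste into the Dashboard login form:
token = base64.b64decode(encoded).decode()
assert token == "eyJhbGciOiJSUzI1NiJ9.demo-token"
```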
kubesphere

The default dashboard is of limited use. KubeSphere covers the whole DevOps pipeline and bundles many components, but it demands a beefier cluster.

https://kubesphere.io

kuboard is also quite good and has much lighter cluster requirements.

https://kuboard.cn/support/
1. Introduction

KubeSphere is an open-source, cloud-native, distributed multi-tenant container management platform built on top of Kubernetes, the mainstream container orchestration platform. It provides a simple UI and wizard-style workflows, lowering the learning curve of container orchestration while greatly reducing the day-to-day complexity of development, testing and operations.
2. Installation prerequisites

1. Install Helm (run on the master node)

Helm is the package manager for Kubernetes. Like apt on Ubuntu, yum on CentOS, or pip for Python, it lets you quickly find, download and install software packages. Helm consists of the client component helm and the server-side component Tiller; it packages a set of K8s resources for unified management and is the best way to find, share and use software built for Kubernetes.
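Helm's core job, rendering chart templates with user-supplied values into plain Kubernetes manifests, can be sketched in a few lines. This is a toy model for intuition only; real charts use Go templates and a `values.yaml`, and the chart name below is made up:

```python
from string import Template

# A miniature "chart": one templated manifest plus its default values.
chart_template = Template(
    "apiVersion: apps/v1\n"
    "kind: Deployment\n"
    "metadata:\n"
    "  name: $name\n"
    "spec:\n"
    "  replicas: $replicas\n"
)
default_values = {"name": "nginx-ingress", "replicas": 1}

def render(values=None):
    """Merge user values over the chart defaults and render the manifest."""
    merged = {**default_values, **(values or {})}
    return chart_template.substitute(merged)

manifest = render({"replicas": 3})   # like `helm install --set replicas=3`
assert "replicas: 3" in manifest
assert "name: nginx-ingress" in manifest
```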
1) Install

```shell
curl -L https://git.io/get_helm.sh | bash
```

Because this URL is blocked, use the get_helm.sh script we provide instead.
```shell
[root@k8s-node1 k8s]# ll
total 68
-rw-r--r-- 1 root root  7149 Feb 27 01:58 get_helm.sh
-rw-r--r-- 1 root root  6310 Feb 28 05:16 ingress-controller.yaml
-rw-r--r-- 1 root root   209 Feb 28 13:18 ingress-demo.yml
-rw-r--r-- 1 root root   236 May  4 05:09 ingress-tomcat6.yaml
-rwxr--r-- 1 root root 15016 Feb 26 15:05 kube-flannel.yml
-rw-r--r-- 1 root root  4737 Feb 26 15:38 kubernetes-dashboard.yaml
-rw-r--r-- 1 root root  3841 Feb 27 01:09 kubesphere-complete-setup.yaml
-rw-r--r-- 1 root root   392 Feb 28 11:33 master_images.sh
-rw-r--r-- 1 root root   283 Feb 28 11:34 node_images.sh
-rw-r--r-- 1 root root  1053 Feb 28 03:53 product.yaml
-rw-r--r-- 1 root root   931 May  3 10:08 Vagrantfile
[root@k8s-node1 k8s]# sh get_helm.sh
Downloading https://get.helm.sh/helm-v2.16.6-linux-amd64.tar.gz
Preparing to install helm and tiller into /usr/local/bin
helm installed into /usr/local/bin/helm
tiller installed into /usr/local/bin/tiller
Run 'helm init' to configure helm.
[root@k8s-node1 k8s]#
```
2) Verify the version

```shell
helm version
```

3) Create permissions (run on master)

Create helm-rbac.yaml with the following content:
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
```
Apply the config:

```shell
[root@k8s-node1 k8s]# kubectl apply -f helm-rbac.yaml
serviceaccount/tiller created
clusterrolebinding.rbac.authorization.k8s.io/tiller created
[root@k8s-node1 k8s]#
```
2. Install Tiller (run on master)

1. Initialize:
```shell
[root@k8s-node1 k8s]# helm init --service-account=tiller --tiller-image=sapcc/tiller:v2.16.3 --history-max 300
Creating /root/.helm
Creating /root/.helm/repository
Creating /root/.helm/repository/cache
Creating /root/.helm/repository/local
Creating /root/.helm/plugins
Creating /root/.helm/starters
Creating /root/.helm/cache/archive
Creating /root/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Adding local repo with URL: http://127.0.0.1:8879/charts
$HELM_HOME has been configured at /root/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://v2.helm.sh/docs/securing_installation/
[root@k8s-node1 k8s]#
```
`--tiller-image` pins the Tiller image (the default registry is blocked); then just wait for the tiller pod deployed on the node to become ready.
```shell
[root@k8s-node1 k8s]# kubectl get pods -n kube-system
NAME                                   READY   STATUS             RESTARTS   AGE
coredns-546565776c-9sbmk               1/1     Running            3          23h
coredns-546565776c-t68mr               1/1     Running            3          23h
etcd-k8s-node1                         1/1     Running            3          23h
kube-apiserver-k8s-node1               1/1     Running            3          23h
kube-controller-manager-k8s-node1      1/1     Running            3          23h
kube-flannel-ds-amd64-5xs5j            1/1     Running            4          22h
kube-flannel-ds-amd64-6xwth            1/1     Running            5          23h
kube-flannel-ds-amd64-fvnvx            1/1     Running            4          22h
kube-proxy-7tkvl                       1/1     Running            3          22h
kube-proxy-mvlnk                       1/1     Running            4          22h
kube-proxy-sz2vz                       1/1     Running            3          23h
kube-scheduler-k8s-node1               1/1     Running            3          23h
kubernetes-dashboard-975499656-jxczv   0/1     ImagePullBackOff   0          7h45m
tiller-deploy-8cc566858-67bxb          1/1     Running            0          31s
[root@k8s-node1 k8s]#
```
Check all nodes in the cluster:

```shell
kubectl get node -o wide
```
2. Test

```shell
helm install stable/nginx-ingress --name nginx-ingress
```
Minimal KubeSphere installation

If the cluster has more than 1 free CPU core and more than 2 GB of free memory, you can do a minimal KubeSphere install with:

```shell
kubectl apply -f https://raw.githubusercontent.com/kubesphere/ks-installer/master/kubesphere-minimal.yaml
```

Tip: if your server cannot reach GitHub, save kubesphere-minimal.yaml (or kubesphere-complete-setup.yaml) locally as a static file and run the same command against it.

Note: if you run into problems during installation, inspect the installer logs to troubleshoot.