Automated Operations with k8s — Helm, Prometheus, EFK Log Management, k8s High-Availability Cluster (to be continued)
I. k8s High-Availability Cluster (3.12 class)
Several common HA cluster topologies
1. Stacked etcd topology
2. External etcd topology
3. External etcd topology (load balancer = lvs + keepalived)
4. External etcd topology (load balancer placed on the nodes)
The rest of this section uses the third approach — external etcd topology (load balancer = lvs + keepalived) — as the example.
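Whatever load balancer is chosen, the key point of this topology is that every control-plane node and every kubelet reaches the API server through the load balancer's virtual IP instead of a single master. A minimal kubeadm ClusterConfiguration sketch for such a setup might look like the following; the VIP 172.25.254.200:6443 matches the one configured later in this section, while the external etcd endpoints and certificate paths are hypothetical placeholders:

apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
# all API traffic goes through the load balancer VIP, not a single master
controlPlaneEndpoint: "172.25.254.200:6443"
etcd:
  external:
    # hypothetical external etcd members; replace with real endpoints
    endpoints:
      - https://etcd1.example.com:2379
      - https://etcd2.example.com:2379
      - https://etcd3.example.com:2379
    caFile: /etc/kubernetes/pki/etcd/ca.crt
    certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
    keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key

If a stacked etcd is used instead (topology 1), the etcd stanza is simply omitted and kubeadm runs etcd on each control-plane node.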
(1) Build the HA platform: create two new virtual machines, server5 and server6, to act as the HA hosts.
Step 1: install haproxy
# Install haproxy on server5 and server6
yum install -y haproxy

Then edit the haproxy configuration file on server5/6.
Note: the frontend port is set to 6443 because the k8s cluster exposes its API server on port 6443; using the same port keeps things consistent. A status page protected by a username and password is configured on port 8000.
vim /etc/haproxy/haproxy.cfg

#---------------------------------------------------------------------
# Example configuration for a possible web application.  See the
# full configuration options online.
#
#   http://haproxy.1wt.eu/download/1.4/doc/configuration.txt
#
#---------------------------------------------------------------------

#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    # to have these messages end up in /var/log/haproxy.log you will
    # need to:
    #
    # 1) configure syslog to accept network log events.  This is done
    #    by adding the '-r' option to the SYSLOGD_OPTIONS in
    #    /etc/sysconfig/syslog
    #
    # 2) configure local2 events to go to the /var/log/haproxy.log
    #    file. A line like the following can be added to
    #    /etc/sysconfig/syslog
    #
    #    local2.*                       /var/log/haproxy.log
    #
    log         127.0.0.1 local2

    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon

    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats

#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode                    http
    log                     global
    option                  httplog
#    option                  dontlognull
    option http-server-close
#    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000

listen stats
    bind *:8000
    stats uri /status
    stats auth admin:westos

#---------------------------------------------------------------------
# main frontend which proxys to the backends
#---------------------------------------------------------------------
frontend k8s *:6443
    mode tcp
#    acl url_static       path_beg       -i /static /images /javascript /stylesheets
#    acl url_static       path_end       -i .jpg .gif .png .css .js

#    use_backend static          if url_static
    default_backend             apiserver

#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------
backend apiserver
    mode tcp
    balance     roundrobin
    server  api1 172.25.254.2:6443 check
    server  api2 172.25.254.3:6443 check
    server  api3 172.25.254.4:6443 check

Restart the service on server5/6:
systemctl restart haproxy.service

Test in a browser:
The two hosts operate independently of each other; each one can monitor the status of the hosts in the backend k8s cluster from its own stats page.
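As a quick non-browser check, the stats page can also be queried with curl; this sketch assumes server5 and server6 resolve to the two HA hosts and reuses the admin:westos credentials from haproxy.cfg above:

# Query the haproxy status page on each HA host
curl -u admin:westos http://server5:8000/status
curl -u admin:westos http://server6:8000/status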
After the test is complete, stop the service.
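A minimal sketch of doing this on both HA hosts — in the next step haproxy is handed over to Pacemaker as a cluster resource, so it should not keep running (or starting at boot) on its own:

# Stop haproxy on server5 and server6; Pacemaker will control it later via systemd:haproxy
systemctl stop haproxy.service
# optionally also keep it from starting at boot, since the cluster will start it
systemctl disable haproxy.service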
Step 2: add high availability — set up the VIP
Because the default repository does not include the high-availability packages, first configure the HighAvailability repository on server5/6:
[root@server5 ~]# cat /etc/yum.repos.d/dvd.repo
[dvd]
name=rhel7.6
baseurl=http://172.25.254.50/rhel7.6
gpgcheck=0

[HighAvailability]
name=rhel7.6 HighAvailability
baseurl=http://172.25.254.50/rhel7.6/addons/HighAvailability
gpgcheck=0

Install the cluster software and configure it:
# Install the cluster packages on server5/6
yum install -y pacemaker pcs psmisc policycoreutils-python

# Enable and start the pcsd service on server5/6
systemctl enable --now pcsd.service

# Set a password for the hacluster user on server5/6; it is needed below to authenticate the cluster nodes
echo westos | passwd --stdin hacluster

# Authenticate the HA nodes on server5
[root@server5 ~]# pcs cluster auth server5 server6
Username: hacluster
Password:
server5: Authorized
server6: Authorized

# On server5, add both nodes to the "mycluster" cluster
[root@server5 ~]# pcs cluster setup --name mycluster server5 server6
Destroying cluster on nodes: server5, server6...
server5: Stopping Cluster (pacemaker)...
server6: Stopping Cluster (pacemaker)...

# Start all nodes in the cluster
pcs cluster start --all

# Enable the cluster on all nodes at boot
pcs cluster enable --all

# Disable stonith-enabled to avoid errors
[root@server5 ~]# pcs property set stonith-enabled=false
[root@server5 ~]# crm_verify -LV

Note: if <crm_verify -LV> reports errors, the cluster will not start properly, so STONITH (node fencing) is disabled here first to keep such errors from blocking the cluster.
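Before creating resources, it is worth confirming that both nodes have joined and that the property change took effect; a quick check, assuming the cluster built above:

# Both server5 and server6 should be listed as Online
pcs status

# stonith-enabled should now be listed as false
pcs property list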
# On server5, create the vip resource with address 172.25.254.200
pcs resource create vip ocf:heartbeat:IPaddr2 ip=172.25.254.200 op monitor interval=30s

# Check resource status
[root@server5 ~]# pcs status
Cluster name: mycluster
Stack: corosync
Current DC: server6 (version 1.1.19-8.el7-c3c624ea3d) - partition with quorum
Last updated: Sun Mar 13 17:59:03 2022
Last change: Sun Mar 13 17:58:57 2022 by root via cibadmin on server5

# Create the haproxy resource
pcs resource create haproxy systemd:haproxy op monitor interval=60s

# Check pcs status
pcs status

# Check whether the haproxy service is enabled at boot
systemctl is-enabled haproxy.service

# List all systemd resource agents and show the haproxy agent's parameters
pcs resource agents systemd
pcs resource describe systemd:haproxy

By default, the newly created haproxy resource is balanced onto server6 while the VIP sits on server5, which is not what we want. The fix is to create a group that keeps haproxy and the vip together.
# Create a resource group named hagroup containing vip and haproxy
pcs resource group add hagroup vip haproxy

Resources in a group always run on the same node and start in the listed order (vip first, then haproxy), so haproxy now follows the VIP.
Test in a browser:
Failure simulation: if server5 goes down, the resources automatically fail over to server6, and users see no change in the browser.
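One way to run this failover test with the tools already installed is to stop the cluster stack on the node that currently holds the resources and watch them move; a sketch, assuming server5 currently holds hagroup:

# Simulate a failure of server5 by stopping its cluster services
pcs cluster stop server5

# On server6: vip and haproxy should now both be Started on server6
pcs status

# Bring server5 back into the cluster once the test is done
pcs cluster start server5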
Step 3: create the k8s cluster and join the workers
Note: when changing the network plugin, first delete the files under /etc/cni/net.d/ so that the previous plugin's configuration does not interfere.
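As a hedged sketch of what this step typically looks like with kubeadm: the control-plane endpoint is the VIP 172.25.254.200:6443 from the haproxy frontend above, while the pod CIDR and the token/certificate-key values are placeholders (kubeadm init prints the real ones):

# On the first master (e.g. 172.25.254.2): initialize the control plane against the VIP
kubeadm init --control-plane-endpoint "172.25.254.200:6443" --upload-certs --pod-network-cidr=10.244.0.0/16

# On the other masters (172.25.254.3/4): join as additional control-plane nodes,
# using the token, CA hash and certificate key printed by kubeadm init
kubeadm join 172.25.254.200:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash> \
    --control-plane --certificate-key <key>

# On the worker nodes: join without --control-plane
kubeadm join 172.25.254.200:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>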
Summary