HAProxy Load Balancing and High-Availability Clustering (corosync + pacemaker)
1. HAProxy
HAProxy is free, fast, and reliable proxy software that provides high availability, load balancing, and proxying for TCP (layer 4) and HTTP (layer 7) applications, with support for virtual hosts. It is particularly well suited to heavily loaded web sites, which typically need session persistence or layer-7 processing. Running on current hardware, HAProxy can easily sustain tens of thousands of concurrent connections, and its operating model makes it simple and safe to integrate into an existing architecture while keeping the web servers themselves off the public network.
Lab environment:

The virtual machine setup is described in the previous article.
| Host | IP | Role |
| --- | --- | --- |
| server1 | 172.25.1.1 | HAProxy server and cluster management node |
| server2 | 172.25.1.2 | Load-balanced backend server |
| server3 | 172.25.1.3 | Load-balanced backend server |
| server4 | 172.25.1.4 | Cluster management node |
2. Installing and configuring HAProxy
Install HAProxy, then edit the configuration file:

```
[root@server1 ~]# yum install haproxy -y
[root@server1 ~]# vim /etc/haproxy/haproxy.cfg
```
The configuration file, with the changes for this lab already applied (the frontend listens on port 80, `stats uri /status` is added under `defaults`, and `backend app` points at server2 and server3):

```
#---------------------------------------------------------------------
# Example configuration for a possible web application.  See the
# full configuration options online.
#
#   http://haproxy.1wt.eu/download/1.4/doc/configuration.txt
#
#---------------------------------------------------------------------

#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    # to have these messages end up in /var/log/haproxy.log you will
    # need to:
    #
    # 1) configure syslog to accept network log events.  This is done
    #    by adding the '-r' option to the SYSLOGD_OPTIONS in
    #    /etc/sysconfig/syslog
    #
    # 2) configure local2 events to go to the /var/log/haproxy.log
    #    file. A line like the following can be added to
    #    /etc/sysconfig/syslog
    #
    #    local2.*                       /var/log/haproxy.log
    #
    log         127.0.0.1 local2

    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon

    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats

#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000
    stats uri               /status

#---------------------------------------------------------------------
# main frontend which proxys to the backends
#---------------------------------------------------------------------
frontend  main *:80
#    acl url_static       path_beg       -i /static /images /javascript /stylesheets
#    acl url_static       path_end       -i .jpg .gif .png .css .js
#
#    use_backend static          if url_static
    default_backend             app

#---------------------------------------------------------------------
# static backend for serving up images, stylesheets and such
#---------------------------------------------------------------------
#backend static
#    balance     roundrobin
#    server      static 127.0.0.1:4331 check

#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------
backend app
    balance     roundrobin
    server  app1 172.25.1.2:80 check
    server  app2 172.25.1.3:80 check
```
Once the configuration is done, start the service and check that it is listening:

```
[root@server1 ~]# systemctl start haproxy.service
[root@server1 ~]# netstat -antlp
```

Test:
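A quick test from a client machine, assuming each backend's index.html simply contains its own hostname (the client hostname `foundation` and the output are illustrative):

```
[root@foundation ~]# curl 172.25.1.1
server2
[root@foundation ~]# curl 172.25.1.1
server3
```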
The state of the backend servers can also be checked in a browser through the status page configured above (http://172.25.1.1/status).
The `roundrobin` algorithm cycles through the servers according to their weights. Weights can be customized and changed at runtime, and slow start is supported, so it is a dynamic algorithm; it supports up to 4095 backend servers.
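For example, per-server weights can be set directly on the `server` lines; a sketch (the weights are illustrative) that sends server2 twice as many requests as server3:

```
backend app
    balance roundrobin
    server  app1 172.25.1.2:80 weight 2 check
    server  app2 172.25.1.3:80 weight 1 check
```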
The `source` scheduling algorithm hashes the source IP address of the request and uses the result to pick the backend server, so requests from a fixed IP are always dispatched to the same server.
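Switching to it is just a change to the `balance` line in the backend:

```
backend app
    balance source
    server  app1 172.25.1.2:80 check
    server  app2 172.25.1.3:80 check
```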
If the httpd service on one of the backend servers is shut down, HAProxy stops dispatching to it, which shows that it health-checks the backend servers (the `check` keyword on each `server` line).
Password-protecting access to the status page
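This is done with `stats auth` next to the `stats uri` line; a minimal sketch (the credentials are hypothetical):

```
    stats uri   /status
    stats auth  admin:westos    # hypothetical credentials; the browser will prompt for them
```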
3. HAProxy log management
```
[root@server1 ~]# vim /etc/security/limits.conf
haproxy         -       nofile          4096
[root@server1 ~]# vim /etc/sysconfig/rsyslog
# Options for rsyslogd
# Syslogd options are deprecated since rsyslog v3.
# If you want to use them, switch to compatibility mode 2 by "-c 2"
# See rsyslogd(8) for more details
SYSLOGD_OPTIONS="-r"
[root@server1 ~]# vim /etc/rsyslog.conf
[root@server1 ~]# systemctl restart rsyslog.service
```
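The edit to /etc/rsyslog.conf itself is not shown above; a minimal sketch of the lines usually needed, matching the `local2` facility that haproxy.cfg logs to (UDP reception on port 514 is an assumption):

```
# /etc/rsyslog.conf - accept syslog over UDP and route HAProxy's facility
$ModLoad imudp
$UDPServerRun 514
local2.*    /var/log/haproxy.log
```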
Test:
4. Separating static and dynamic content with HAProxy
On server3, create a directory to hold the static content, then test:

```
[root@server3 html]# mkdir images
```
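The haproxy.cfg side of the split appeared only as a screenshot; a minimal sketch based on the commented-out example in the stock configuration (uncommented, with the `static` backend pointed at server3; the server line is an assumption):

```
frontend  main *:80
    acl url_static       path_beg       -i /static /images /javascript /stylesheets
    acl url_static       path_end       -i .jpg .gif .png .css .js
    use_backend static          if url_static
    default_backend             app

backend static
    balance     roundrobin
    server      static1 172.25.1.3:80 check
```

With this in place, requests for /images and other static paths go to server3, while everything else is still balanced across the `app` backend.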
Setting up an access blacklist:
When a blacklisted client's request is denied, it can be pointed at another page instead; for example:
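A minimal sketch of both steps (the blacklisted address and the port-8080 error page are assumptions):

```
frontend  main *:80
    acl blacklist src 172.25.1.250
    http-request deny if blacklist        # blacklisted clients get 403 Forbidden
    errorloc 403 http://172.25.1.1:8080   # send the 403 to a dedicated error page instead
```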
Another approach is a redirect:
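The same effect with an explicit `redirect` rule instead of a deny (same assumed addresses):

```
frontend  main *:80
    acl blacklist src 172.25.1.250
    redirect location http://172.25.1.1:8080 if blacklist
```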
5. Read/write splitting with HAProxy
server2 and server3 are set up the same way.
```
[root@server2 html]# yum install php -y
[root@server3 html]# yum install php -y
[root@server3 html]# vim index.php
```

index.php:

```
<html>
<body>
<form action="upload_file.php" method="post" enctype="multipart/form-data">
  <label for="file">Filename:</label>
  <input type="file" name="file" id="file" />
  <br />
  <input type="submit" name="submit" value="Submit" />
</form>
</body>
</html>
```

```
[root@server3 html]# vim upload_file.php
[root@server3 html]# mkdir upload
[root@server3 html]# chmod 777 upload
```

upload_file.php:

```
<?php
// Accept only small GIF/JPEG uploads and store them under upload/
if ((($_FILES["file"]["type"] == "image/gif")
    || ($_FILES["file"]["type"] == "image/jpeg")
    || ($_FILES["file"]["type"] == "image/pjpeg"))
    && ($_FILES["file"]["size"] < 2000000))
{
    if ($_FILES["file"]["error"] > 0)
    {
        echo "Return Code: " . $_FILES["file"]["error"] . "<br />";
    }
    else
    {
        echo "Upload: " . $_FILES["file"]["name"] . "<br />";
        echo "Type: " . $_FILES["file"]["type"] . "<br />";
        echo "Size: " . ($_FILES["file"]["size"] / 1024) . " Kb<br />";
        echo "Temp file: " . $_FILES["file"]["tmp_name"] . "<br />";
        if (file_exists("upload/" . $_FILES["file"]["name"]))
        {
            echo $_FILES["file"]["name"] . " already exists. ";
        }
        else
        {
            move_uploaded_file($_FILES["file"]["tmp_name"],
                "upload/" . $_FILES["file"]["name"]);
            echo "Stored in: " . "upload/" . $_FILES["file"]["name"];
        }
    }
}
else
{
    echo "Invalid file";
}
?>
```

Note: httpd must be restarted afterwards.
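The haproxy.cfg change that performs the actual split appeared only in a screenshot. A minimal sketch (the backend names and the method-based ACL are assumptions) that sends writes, i.e. the POSTed uploads, to server3 and everything else to server2:

```
frontend  main *:80
    acl write_request method POST
    use_backend write if write_request
    default_backend  read

backend read
    balance roundrobin
    server  app1 172.25.1.2:80 check

backend write
    balance roundrobin
    server  app2 172.25.1.3:80 check
```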
Test:
6. Deploying corosync + pacemaker
corosync is the cluster framework (messaging) engine; pacemaker is the high-availability cluster resource manager.
Installation and configuration
```
[root@server4 ~]# yum install haproxy.x86_64
[root@server4 ~]# ssh-keygen
[root@server4 ~]# ssh-copy-id server1
[root@server4 ~]# yum install -y pacemaker pcs psmisc policycoreutils-python
[root@server4 ~]# ssh server1 yum install -y pacemaker pcs psmisc policycoreutils-python
[root@server4 yum.repos.d]# systemctl enable --now pcsd.service
[root@server4 yum.repos.d]# ssh server1 systemctl enable --now pcsd.service
[root@server1 haproxy]# scp /etc/haproxy/haproxy.cfg server4:/etc/haproxy/
[root@server4 ~]# passwd hacluster                       ## set a password for the hacluster user on both nodes
[root@server4 ~]# ssh server1 passwd hacluster
[root@server4 ~]# pcs cluster auth server1 server4
[root@server4 ~]# pcs cluster setup --name mycluster server1 server4   ## generate and synchronize the corosync configuration from one node
[root@server4 ~]# pcs cluster start --all                ## start the cluster
[root@server4 ~]# pcs cluster enable --all               ## start the cluster on boot
```
```
[root@server4 ~]# pcs status                               ## check the cluster status
[root@server4 ~]# crm_verify -LV                           ## show the errors in the cluster configuration
[root@server4 ~]# pcs property set stonith-enabled=false   ## silence the STONITH error for now; fencing is configured later
```

Add a resource:
```
[root@server4 ~]# pcs resource --help
[root@server4 ~]# pcs resource create ClusterIP ocf:heartbeat:IPaddr2 ip=172.25.1.100 op monitor interval=30s
[root@server4 ~]# pcs cluster stop server1                     ## stop server1 and watch the cluster status
[root@server4 ~]# pcs resource agents systemd | grep haproxy   ## check that a systemd agent for haproxy is available
[root@server4 ~]# pcs resource create haproxy systemd:haproxy op monitor interval=60s   ## put haproxy under cluster control
[root@server4 ~]# pcs resource group add hagroup ClusterIP haproxy   ## group the resources to constrain them and fix their start order, so they run on the same node
```

If ClusterIP or the haproxy service goes down, the cluster automatically restarts the service or re-adds ClusterIP.

```
[root@server4 ~]# systemctl stop haproxy.service
```

Test:
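After haproxy is stopped by hand as above, the 60-second monitor operation detects the failure and starts it again; one way to watch this (a sketch, exact output varies):

```
[root@server4 ~]# pcs status resources    ## haproxy should return to Started within the monitor interval
```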
If the NIC of the node currently holding the resources is taken down, they automatically move to the other node in the cluster:

```
[root@server4 ~]# ip link set down eth0
```

Test:
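Since server4 is now unreachable, check from the surviving node (a sketch):

```
[root@server1 ~]# pcs status    ## hagroup (ClusterIP and haproxy) should now be running on server1
```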
```
[root@server4 ~]# yum install -y fence-virt.x86_64
[root@server4 ~]# ssh server1 yum install -y fence-virt.x86_64
[root@server1 ~]# echo c > /proc/sysrq-trigger    ## crash the kernel
```

Test:
```
[root@Sun_s ~]# dnf install fence-virtd-libvirt.x86_64 fence-virtd-multicast.x86_64 fence-virtd.x86_64 -y   ## install on the physical host
[root@Sun_s ~]# fence_virtd -c
Module search path [/usr/lib64/fence-virt]:

Available backends:
    libvirt 0.3
Available listeners:
    multicast 1.2

Listener modules are responsible for accepting requests
from fencing clients.

Listener module [multicast]:

The multicast listener module is designed for use environments
where the guests and hosts may communicate over a network using
multicast.

The multicast address is the address that a client will use to
send fencing requests to fence_virtd.

Multicast IP Address [225.0.0.12]:

Using ipv4 as family.

Multicast IP Port [1229]:

Setting a preferred interface causes fence_virtd to listen only
on that interface.  Normally, it listens on all interfaces.
In environments where the virtual machines are using the host
machine as a gateway, this *must* be set (typically to virbr0).
Set to 'none' for no interface.

Interface [virbr0]: br0

The key file is the shared key information which is used to
authenticate fencing requests.  The contents of this file must
be distributed to each physical host and virtual machine within
a cluster.

Key File [/etc/cluster/fence_xvm.key]:

Backend modules are responsible for routing requests to
the appropriate hypervisor or management layer.

Backend module [libvirt]:

The libvirt backend module is designed for single desktops or
servers.  Do not use in environments where virtual machines
may be migrated between hosts.

Libvirt URI [qemu:///system]:

Configuration complete.

=== Begin Configuration ===
backends {
	libvirt {
		uri = "qemu:///system";
	}
}

listeners {
	multicast {
		port = "1229";
		family = "ipv4";
		interface = "br0";
		address = "225.0.0.12";
		key_file = "/etc/cluster/fence_xvm.key";
	}
}

fence_virtd {
	module_path = "/usr/lib64/fence-virt";
	backend = "libvirt";
	listener = "multicast";
}

=== End Configuration ===
Replace /etc/fence_virt.conf with the above [y/N]? y
```

Note: only the interface prompt needs manual input (your bridge name, `br0` here); just press Enter to accept the defaults everywhere else.
```
[root@Sun_s ~]# cd /etc/
[root@Sun_s etc]# mkdir cluster/        ## the directory does not exist yet, so create it by hand
[root@Sun_s etc]# cd cluster/
[root@Sun_s cluster]# dd if=/dev/urandom of=fence_xvm.key bs=128 count=1   ## generate the shared key
[root@Sun_s cluster]# systemctl restart fence_virtd.service
[root@server4 ~]# mkdir /etc/cluster                     ## create the key directory on both nodes first
[root@server4 ~]# ssh server1 mkdir /etc/cluster
[root@Sun_s cluster]# scp fence_xvm.key root@172.25.1.4:/etc/cluster/
[root@Sun_s cluster]# scp fence_xvm.key root@172.25.1.1:/etc/cluster/     ## distribute the key to the cluster nodes
[root@server4 ~]# pcs stonith create vmfence fence_xvm pcmk_host_map="server1:sun1;server4:sun4" op monitor interval=60s   ## add the hostname-to-domain mapping to the cluster
[root@server4 ~]# pcs property set stonith-enabled=true  ## turn STONITH back on
```

Test:
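Fencing can also be exercised by hand before trusting the cluster with it, using the fence_xvm client (`sun1` is the libvirt domain mapped to server1 above; the default action is reboot, so this power-cycles that VM):

```
[root@server4 ~]# fence_xvm -H sun1    ## server1's VM should power-cycle
```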
```
[root@server1 ~]# echo c > /proc/sysrq-trigger    ## simulate a system crash on server1
```

Now open server1's virtual machine console and watch what happens.