Setting Up MySQL MMM with Multiple Masters and Slaves
I. Introduction to MySQL High Availability
MMM (Master-Master Replication Manager for MySQL) consists of one management node (the monitor) and several agent nodes (the agents). MMM monitors and manages MySQL master-master replication and service state, also monitors the replication and running state of multiple slave nodes, and can switch roles automatically when any node fails.
The MMM suite has three main scripts:
mmm_mond: the monitor process. It runs on the management node, monitors all database nodes, and decides on and carries out all role switches.
mmm_agent: the agent process. It runs on every MySQL server, performs the monitoring probes, and applies remote service settings.
mmm_control: the management script, used to view and manage the cluster state and to manage the mmm_mond process.
II. Typical MMM Application Architecture
III. Configuring a Dual-Master, Multi-Slave MySQL Architecture with MMM
Environment used in this article:

Master:  CentOS 6.7 x64  192.168.106.106  mysql-5.7
Master:  CentOS 6.7 x64  192.168.106.107  mysql-5.7
Slave:   CentOS 6.7 x64  192.168.106.109  mysql-5.7
Slave:   CentOS 6.7 x64  192.168.106.110  mysql-5.7
Monitor: CentOS 6.7 x64  192.168.106.148

MMM server role mapping:

node1 (master 1)  192.168.106.106  db1  server-id 1
node2 (master 2)  192.168.106.107  db2  server-id 2
node3 (slave 1)   192.168.106.109  db3  server-id 11
node4 (slave 2)   192.168.106.110  db4  server-id 12
node5 (monitor)   192.168.106.148  mon

After configuration is complete, the MySQL cluster is accessed through the following VIPs (virtual IPs):

192.168.106.211  writer
192.168.106.212  reader
192.168.106.213  reader
192.168.106.214  reader
192.168.106.215  reader

Modify /etc/sysconfig/network-scripts/ifcfg-ens33 as follows:
 
IV. Preparing the Dual-Master and Slave Server Environments
1. node1 (master 1) /etc/my.cnf
[mysqld]
server-id                = 1
binlog-format            = ROW
log-bin                  = master-bin
log-bin-index            = master-bin.index
log-slave-updates        = true
auto_increment_offset    = 1
auto_increment_increment = 2

systemctl restart mysqld

[root@localhost ~]# mysql -uroot -p
mysql> show master status;
+-------------------+----------+--------------+------------------+-------------------+
| File              | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+-------------------+----------+--------------+------------------+-------------------+
| master-bin.000001 |      120 |              |                  |                   |
+-------------------+----------+--------------+------------------+-------------------+
1 row in set (0.00 sec)

2. node2 (master 2) /etc/my.cnf
[mysqld]
server-id                = 2
binlog-format            = ROW
log-bin                  = master-bin
log-bin-index            = master-bin.index
log-slave-updates        = true
auto_increment_offset    = 2
auto_increment_increment = 2

systemctl restart mysqld

[root@localhost ~]# mysql -uroot -p
mysql> show master status;
+-------------------+----------+--------------+------------------+-------------------+
| File              | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+-------------------+----------+--------------+------------------+-------------------+
| master-bin.000001 |      120 |              |                  |                   |
+-------------------+----------+--------------+------------------+-------------------+
1 row in set (0.00 sec)

3. Create the replication account on both masters (node1 and node2)
grant replication slave on *.* to 'repl'@'%' identified by '123456';
flush privileges;

After running these statements, show master status reports a new position on both masters:

mysql> show master status;
+-------------------+----------+--------------+------------------+-------------------+
| File              | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+-------------------+----------+--------------+------------------+-------------------+
| master-bin.000001 |      407 |              |                  |                   |
+-------------------+----------+--------------+------------------+-------------------+
1 row in set (0.00 sec)

4. node3 (slave) /etc/my.cnf
[mysqld]
binlog-format     = ROW
log-bin           = mysql-bin
relay-log         = slave-relay-bin
relay-log-index   = slave-relay-bin.index
log-slave-updates = true
server-id         = 11
skip-name-resolve

5. node4 (slave) /etc/my.cnf

[mysqld]
binlog-format     = ROW
log-bin           = mysql-bin
relay-log         = slave-relay-bin
relay-log-index   = slave-relay-bin.index
log-slave-updates = true
server-id         = 12
skip-name-resolve

6. Configure replication on node1 (as a slave of node2)
(1) Configure replication by running the CHANGE MASTER statement manually; the settings are written to the master.info file (a sketch for both masters follows after step 7).
7. Configure replication on node2 (as a slave of node1)
(1) Configure replication by running the CHANGE MASTER statement manually; the settings are written to the master.info file (see the sketch below).
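A minimal sketch of the two CHANGE MASTER statements, assuming the repl account from step 3 and the binlog coordinates reported by show master status above (master-bin.000001 / 407); substitute whatever your own show master status reports:

-- On node1, replicate from node2 (192.168.106.107)
CHANGE MASTER TO
  MASTER_HOST='192.168.106.107',
  MASTER_USER='repl',
  MASTER_PASSWORD='123456',
  MASTER_LOG_FILE='master-bin.000001',
  MASTER_LOG_POS=407;
START SLAVE;

-- On node2, replicate from node1 (192.168.106.106)
CHANGE MASTER TO
  MASTER_HOST='192.168.106.106',
  MASTER_USER='repl',
  MASTER_PASSWORD='123456',
  MASTER_LOG_FILE='master-bin.000001',
  MASTER_LOG_POS=407;
START SLAVE;

-- On both nodes, confirm Slave_IO_Running and Slave_SQL_Running are Yes
SHOW SLAVE STATUS\G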
8. Test master-master replication
(1) Create a database on node1 (a sketch of the statements is given below).
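The exact statements are not shown in the original; a sketch that matches the data later queried on node2 (table and column definitions are inferred from that output and are assumptions):

create database mydb;
use mydb;
create table teacher (
  id      int primary key,
  name    varchar(20),
  address varchar(50),
  year    date
);
insert into teacher values (1, 'allen', '飛數科技1', '2005-10-10');
insert into teacher values (2, 'jack',  '飛數科技2', '2005-12-23');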
(2) On node2, verify that the data has replicated:
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mydb               |
| mysql              |
| performance_schema |
+--------------------+
4 rows in set (0.00 sec)

mysql> use mydb;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> show tables;
+----------------+
| Tables_in_mydb |
+----------------+
| teacher        |
+----------------+
1 row in set (0.00 sec)

mysql> select * from teacher;
+----+-------+---------------+------------+
| id | name  | address       | year       |
+----+-------+---------------+------------+
|  1 | allen | 飛數科技1     | 2005-10-10 |
|  2 | jack  | 飛數科技2     | 2005-12-23 |
+----+-------+---------------+------------+
2 rows in set (0.00 sec)

The following table was also created and populated as part of the replication test:

CREATE TABLE `user` (
  `id` varchar(20) NOT NULL,
  `username` varchar(20) NOT NULL,
  `password` char(32) NOT NULL,
  PRIMARY KEY (`id`)
);
INSERT INTO user VALUES ('1', 'koumm', '123456');

V. Configure node3 and node4 to Replicate from node1 (the Master)
1. Lock tables on the node1 master and dump the database
(1) Lock the tables on the master (the statement is sketched below).
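The lock statement itself is not shown in the original; a minimal sketch using the standard MySQL global read lock:

mysql> flush tables with read lock;
mysql> show master status;    -- note the binlog file and position for the slaves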
(2) Back up the master:
[root@master ~]# mysqldump -uroot -p -B mydb > mydb.sql
Note: the -B option includes the CREATE DATABASE statement in the dump.
(3) Unlock the tables on the master:
mysql> unlock tables;
Query OK, 0 rows affected (0.00 sec)

2. Import the database on the node3 and node4 slaves:
# mysql -uroot -p < mydb.sql

3. Configure replication on the node3 and node4 slaves
(1) Configure replication by running the CHANGE MASTER statement manually; the settings are written to the master.info file (see the sketch below).
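A minimal sketch for both slaves, assuming node1 (192.168.106.106) as the master, the repl account created earlier, and the binlog coordinates recorded while the tables were locked (the file and position below are placeholders; use your own values):

-- Run on node3 and node4
CHANGE MASTER TO
  MASTER_HOST='192.168.106.106',
  MASTER_USER='repl',
  MASTER_PASSWORD='123456',
  MASTER_LOG_FILE='master-bin.000001',
  MASTER_LOG_POS=407;
START SLAVE;
SHOW SLAVE STATUS\G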
You can then run a quick test to confirm that data is replicating, for example:
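An illustrative check only, reusing the mydb.teacher table created earlier:

-- On node1
insert into mydb.teacher values (3, 'tom', 'test', '2016-01-01');

-- On node3 and node4
select * from mydb.teacher where id = 3;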
VI. Configuring MMM
1. Install the MMM packages
(1) On all nodes (node1, node2, node3, node4, node5), enable the package repository first (see the note below).
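The command itself is missing at this point; judging by the installation steps repeated later in this post, it is presumably the EPEL repository setup, which provides the mysql-mmm packages:

# yum install epel-release -y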
(2) node5 (the monitor node):
# yum install mysql-mmm mysql-mmm-agent mysql-mmm-tools mysql-mmm-monitor
(3) node1, node2, node3, node4 (the MySQL nodes):
# yum install mysql-mmm-agent

2. Create the following two users on all MySQL nodes. This can be done on the master only; the grants replicate automatically to the other nodes.
mysql> grant replication client on *.* to 'mmm_monitor'@'192.168.106.%' identified by '123456';
mysql> grant super, replication client, process on *.* to 'mmm_agent'@'192.168.106.%' identified by '123456';
mysql> flush privileges;

3. Create the following file on all hosts (node1, node2, node3, node4, node5)
[root@node1 ~]# vi /etc/mysql-mmm/mmm_common.conf
active_master_role      writer

<host default>
    cluster_interface       eth0
    pid_path                /var/run/mysql-mmm/mmm_agentd.pid
    bin_path                /usr/libexec/mysql-mmm/
    replication_user        repl
    replication_password    123456
    agent_user              mmm_agent
    agent_password          123456
</host>

<host db1>
    ip      192.168.106.106
    mode    master
    peer    db2
</host>

<host db2>
    ip      192.168.106.107
    mode    master
    peer    db1
</host>

<host db3>
    ip      192.168.106.109
    mode    slave
</host>

<host db4>
    ip      192.168.106.110
    mode    slave
</host>

<role writer>
    hosts   db1, db2
    ips     192.168.106.211
    mode    exclusive
</role>

<role reader>
    hosts   db1, db2, db3, db4
    ips     192.168.106.212, 192.168.106.213, 192.168.106.214, 192.168.106.215
    mode    balanced
</role>

Copy this configuration file to all the other nodes:
[root@node1 ~]# scp /etc/mysql-mmm/mmm_common.conf node2:/etc/mysql-mmm/
[root@node1 ~]# scp /etc/mysql-mmm/mmm_common.conf node3:/etc/mysql-mmm/
[root@node1 ~]# scp /etc/mysql-mmm/mmm_common.conf node4:/etc/mysql-mmm/
[root@node1 ~]# scp /etc/mysql-mmm/mmm_common.conf node5:/etc/mysql-mmm/

4. Configure the agent nodes; every MySQL master and slave server is an agent node
(1) Configure mmm_agent.conf on the MySQL nodes node1 through node4 (a sketch follows).
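The file contents are not shown at this point; based on the agent configuration shown later in this post, it is presumably just the common include plus each node's own name:

# cat /etc/mysql-mmm/mmm_agent.conf
include mmm_common.conf
this db1        # use db1, db2, db3 or db4 to match the node being configured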
(2) The agent is enabled by default, so this file normally does not need to be changed:
# cat /etc/default/mysql-mmm-agent
# mysql-mmm-agent defaults
ENABLED=1

(3) Start the service on the MySQL nodes:
# chkconfig mysql-mmm-agent on
# /etc/init.d/mysql-mmm-agent start

5. Configure the MMM monitor node (node5)
(1) Configure the monitor (a sketch of mmm_mon.conf follows).
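The monitor configuration is not shown here; a minimal sketch of /etc/mysql-mmm/mmm_mon.conf, reconstructed from the monitor section later in this post and adapted to this article's IPs and the mmm_monitor account from step 2 (treat the exact paths and values as assumptions):

include mmm_common.conf

<monitor>
    ip              127.0.0.1
    pid_path        /var/run/mysql-mmm/mmm_mond.pid
    bin_path        /usr/libexec/mysql-mmm
    status_path     /var/lib/mysql-mmm/mmm_mond.status
    ping_ips        192.168.106.106, 192.168.106.107, 192.168.106.109, 192.168.106.110
</monitor>

<host default>
    monitor_user        mmm_monitor
    monitor_password    123456
</host>

debug 0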
(2) Start the monitor service:
# chkconfig mysql-mmm-monitor on
# /etc/init.d/mysql-mmm-monitor start

6. Check the status
(1) Check the status from the management (monitor) server; see the commands below.
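The status is checked with mmm_control on node5; sample output for both commands appears in the test section below:

[root@node5 ~]# mmm_control show
[root@node5 ~]# mmm_control checks all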
When an agent comes online it:
1. Sets up the virtual IPs.
2. Starts mmm_agentd from /usr/sbin/.
If replication on a slave stops with an error at this point (the specific error is not shown in the original), the workaround is:
mysql> stop slave;
mysql> set global sql_slave_skip_counter=1;
mysql> start slave;
mysql> show slave status;

(2) VIP addresses brought up on the servers
[root@node1 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:ff:33:6a brd ff:ff:ff:ff:ff:ff
    inet 192.168.106.106/24 brd 192.168.106.255 scope global eth0
    inet 192.168.106.215/32 scope global eth0
    inet 192.168.106.211/32 scope global eth0
    inet6 fe80::20c:29ff:feff:336a/64 scope link
       valid_lft forever preferred_lft forever
[root@node1 ~]#

[root@node2 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:8c:60:58 brd ff:ff:ff:ff:ff:ff
    inet 192.168.106.107/24 brd 192.168.106.255 scope global eth0
    inet 192.168.106.213/32 scope global eth0
    inet6 fe80::20c:29ff:fe8c:6058/64 scope link
       valid_lft forever preferred_lft forever
[root@node2 ~]#

Reads and writes go through the VIP addresses; when a failure occurs, the VIP floats to another node, which then provides the service.
VII. MMM High-Availability Testing
First check the status of the whole cluster; everything is normal:
[root@node5 ~]# mmm_control show
  db1(192.168.106.106) master/ONLINE. Roles: reader(192.168.106.212), writer(192.168.106.211)
  db2(192.168.106.107) master/ONLINE. Roles: reader(192.168.106.213)
  db3(192.168.106.109) slave/ONLINE. Roles: reader(192.168.106.215)
  db4(192.168.106.110) slave/ONLINE. Roles: reader(192.168.106.214)

1. Simulate a node2 failure: stop the MySQL service manually and watch the monitor log. The log for node2 going down and later coming back up is:
[root@192.168.106.30 ~]# tail -f /var/log/mysql-mmm/mmm_mond.log
2016/04/29 09:58:33 FATAL State of host 'db2' changed from ONLINE to HARD_OFFLINE (ping: OK, mysql: not OK)
2016/04/29 10:00:54 FATAL State of host 'db2' changed from HARD_OFFLINE to AWAITING_RECOVERY
2016/04/29 10:01:24 FATAL State of host 'db2' changed from AWAITING_RECOVERY to ONLINE because of auto_set_online(30 seconds). It was in state AWAITING_RECOVERY for 30 seconds

Check the latest cluster status again:
[root@node5 ~]# mmm_control show
  db1(192.168.106.201) master/ONLINE. Roles: reader(192.168.106.212), writer(192.168.106.211)
  db2(192.168.106.202) master/HARD_OFFLINE. Roles:          # db2 has gone from ONLINE to HARD_OFFLINE
  db3(192.168.106.203) slave/ONLINE. Roles: reader(192.168.106.213), reader(192.168.106.215)
  db4(192.168.106.204) slave/ONLINE. Roles: reader(192.168.106.214)

[root@node5 ~]# mmm_control checks all
db4  ping         [last change: 2016/04/29 09:01:08]  OK
db4  mysql        [last change: 2016/04/29 09:01:08]  OK
db4  rep_threads  [last change: 2016/04/29 09:01:08]  OK
db4  rep_backlog  [last change: 2016/04/29 09:01:08]  OK: Backlog is null
db2  ping         [last change: 2016/04/29 09:01:08]  OK
db2  mysql        [last change: 2016/04/29 09:58:33]  ERROR: Connect error (host = 192.168.106.202:3306, user = mmm_monitor)! Lost connection to MySQL server at 'reading initial communication packet', system error: 111
db2  rep_threads  [last change: 2016/04/29 09:01:08]  OK
db2  rep_backlog  [last change: 2016/04/29 09:01:08]  OK: Backlog is null
db3  ping         [last change: 2016/04/29 09:01:08]  OK
db3  mysql        [last change: 2016/04/29 09:01:08]  OK
db3  rep_threads  [last change: 2016/04/29 09:01:08]  OK
db3  rep_backlog  [last change: 2016/04/29 09:01:08]  OK: Backlog is null
db1  ping         [last change: 2016/04/29 09:01:08]  OK
db1  mysql        [last change: 2016/04/29 09:56:33]  OK
db1  rep_threads  [last change: 2016/04/29 09:01:08]  OK
db1  rep_backlog  [last change: 2016/04/29 09:01:08]  OK: Backlog is null
[root@node5 ~]#

db2 still answers ping, so only the MySQL service is down.
Start node2 again: it moves from HARD_OFFLINE to AWAITING_RECOVERY and then back to ONLINE, at which point db2 takes its reader role back.
[root@node5 ~]# mmm_control show
  db1(192.168.106.201) master/ONLINE. Roles: reader(192.168.106.212), writer(192.168.106.211)
  db2(192.168.106.202) master/ONLINE. Roles: reader(192.168.106.213)
  db3(192.168.106.203) slave/ONLINE. Roles: reader(192.168.106.215)
  db4(192.168.106.204) slave/ONLINE. Roles: reader(192.168.106.214)
[root@node5 ~]#

2. Simulate a failure of the node1 master and check the cluster status.
[root@node5 ~]# mmm_control show
  db1(192.168.106.201) master/HARD_OFFLINE. Roles:
  db2(192.168.106.202) master/ONLINE. Roles: reader(192.168.106.213), writer(192.168.106.211)
  db3(192.168.106.203) slave/ONLINE. Roles: reader(192.168.106.212), reader(192.168.106.215)
  db4(192.168.106.204) slave/ONLINE. Roles: reader(192.168.106.214)

Start the node1 master again:
[root@node5 ~]# mmm_control show
  db1(192.168.106.201) master/ONLINE. Roles: reader(192.168.106.212)
  db2(192.168.106.202) master/ONLINE. Roles: reader(192.168.106.213), writer(192.168.106.211)
  db3(192.168.106.203) slave/ONLINE. Roles: reader(192.168.106.215)
  db4(192.168.106.204) slave/ONLINE. Roles: reader(192.168.106.214)

When the old master comes back up it only takes a reader role; it does not take over writing again until the current master fails.
[root@192.168.106.30 ~]# tail -f /var/log/mysql-mmm/mmm_mond.log
2016/04/29 10:03:25 FATAL State of host 'db1' changed from ONLINE to HARD_OFFLINE (ping: OK, mysql: not OK)
2016/04/29 10:06:20 FATAL State of host 'db1' changed from HARD_OFFLINE to AWAITING_RECOVERY
2016/04/29 10:06:51 FATAL State of host 'db1' changed from AWAITING_RECOVERY to ONLINE because of auto_set_online(30 seconds). It was in state AWAITING_RECOVERY for 31 seconds

3. Test summary:
(1) A failure of node2 (the standby master) does not affect the cluster; only node2's reader role is removed.
(2) When node1 (the active master) fails, node2 takes over the writer role, and node3/node4 automatically change master to node2 and replicate from it.
(3) The remaining problem: if node1 fails while node2's replication is still behind, node2 becomes writable and data consistency cannot be guaranteed. Likewise, if node2, node3 and node4 are all lagging behind node1 when it fails, node3 and node4 wait until they have caught up with db1's data before pointing at the new master node2, and that data also cannot be guaranteed to be consistent.
(4) If you adopt the MMM architecture, the active and standby masters should have identical hardware, and you should enable semi-synchronous replication to further improve safety, or use MariaDB multi-threaded slave replication to improve replication performance (a hedged example of enabling semi-sync follows below).
Note: MMM (Master-Master Replication Manager for MySQL) is a master-master replication manager. The MMM suite offers good stability, high availability and scalability. When the active master fails, the standby master takes over immediately, and the other slave nodes automatically switch to replicating from the standby master without manual intervention. On the other hand, MMM needs many nodes and IPs, and under heavy read/write load it is not very stable: replication delay and failed switchovers can occur. MMM is therefore not well suited to environments with very high data-safety requirements and heavy read and write traffic.
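As a hedged illustration of the semi-synchronous replication suggestion in (4), using the MySQL 5.7 semisync plugins (plugin file names assume a Linux build; in a dual-master setup you would typically load both plugins on both masters):

-- On the masters (node1, node2)
INSTALL PLUGIN rpl_semi_sync_master SONAME 'semisync_master.so';
SET GLOBAL rpl_semi_sync_master_enabled = 1;

-- On the slaves (node3, node4)
INSTALL PLUGIN rpl_semi_sync_slave SONAME 'semisync_slave.so';
SET GLOBAL rpl_semi_sync_slave_enabled = 1;
STOP SLAVE IO_THREAD;
START SLAVE IO_THREAD;    -- restart the IO thread so the setting takes effect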
=============================================== 
I. Introduction to MMM for MySQL High Availability
 https://blog.csdn.net/hzsunshine/article/details/67081917 
II. Typical MMM Application Architecture
III. Configuring a Dual-Master, Multi-Slave MySQL Architecture with MMM
The architecture diagram is shown above.
Read/write-splitting IP list for the dual-master, dual-slave architecture:

Role      Physical IP       server_id   Virtual IP         VIP role    Purpose
Master1   192.168.106.106   1           192.168.106.200    writer IP   Write VIP; single point of writes
                                        192.168.106.201    reader IP   Read VIP; one read VIP per node, can be load balanced with load-balancing software
Master2   192.168.106.107   2           192.168.106.202    reader IP
Slave1    192.168.106.109   3           192.168.106.203    reader IP
Slave2    192.168.106.110   4           192.168.106.204    reader IP
Monitor   192.168.106.148
0. Preparation before configuration
Time synchronization:
Install the ntpdate tool:
yum install ntpdate -y
Synchronize the clock with ntpdate (the argument is an NTP server):
ntpdate pool.ntp.org
Disable SELinux:
setenforce 0
sed -i 's/enforcing/disabled/g' /etc/selinux/config
1. Installing and configuring MMM
1. Install the MMM suite
1. Install all MMM components on the Monitor node:
yum install epel-release.noarch -y
yum install mysql-mmm mysql-mmm-agent mysql-mmm-tools mysql-mmm-monitor -y
2. Install mysql-mmm-agent on all other nodes:
yum install epel-release.noarch -y
yum install mysql-mmm-agent -y
2. Master-master replication between Master1 and Master2, and master-slave replication from Master1 to Slave1 and Slave2
(For installation and configuration, refer to the earlier posts "MySQL Master-Slave Replication Configuration" and "MySQL + Keepalived Dual-Master High Availability".)
3. Add the following parameter to /etc/my.cnf on every MySQL node (a restart is required):
read_only=1
read_only is needed because MMM requires strict read/write control over the data.
This parameter does not affect replication, and the root user can still write.
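A quick, illustrative check after the restart; the expected output resembles:

mysql> show variables like 'read_only';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| read_only     | ON    |
+---------------+-------+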
4. On all MySQL nodes, create the monitor user (for health checks) and the agent user (for switching read-only mode and updating master information). Run this only on the writable master; the other nodes pick it up through replication:
grant replication client on *.* to 'mmm_monitor'@'192.168.1.%' identified by 'monitorpasswd';
grant super, replication client, process on *.* to 'mmm_agent'@'192.168.1.%' identified by 'agentpasswd';
5. Configure mmm_common.conf on all MMM nodes. (Note: none of the configuration files below may contain comments, or mmm reports errors; strip the comments with sed -i '/^#/d;s/#.*//g' file.)
vim /etc/mysql-mmm/mmm_common.conf

# When this is set, all MySQL nodes stay at read_only=1 and MMM decides, based on each node's role, whether to run "set global read_only=0".
active_master_role writer

<host default>
    cluster_interface       eno16777736               # network interface
    pid_path                /run/mysql-mmm-agent.pid  # PID file location
    bin_path                /usr/libexec/mysql-mmm/   # path to the MMM executables
    replication_user        slave_cp                  # replication user
    replication_password    pass                      # replication user's password
    agent_user              mmm_agent                 # user that toggles read-only mode
    agent_password          agentpasswd               # that user's password
</host>

# DB1 settings
<host db1>
    ip      192.168.106.106
    mode    master
    peer    db2                 # DB1's peer host
</host>

<host db2>
    ip      192.168.106.107
    mode    master
    peer    db1
</host>

<host db3>
    ip      192.168.106.109
    mode    slave
</host>

<host db4>
    ip      192.168.106.110
    mode    slave
</host>

# writer role: hosts allowed to write
<role writer>
    hosts   db1, db2            # both DB1 and DB2 may hold the writer role
    ips     192.168.1.160       # writable VIP
    mode    exclusive           # exclusive mode: a single IP, held by only one host at a time
</role>

# reader role
<role reader>
    hosts   db1, db2, db3, db4  # hosts that may serve reads
    ips     192.168.106.201, 192.168.106.202, 192.168.106.203, 192.168.106.204
    mode    balanced            # balanced mode: these IPs are distributed dynamically across the MySQL hosts
</role>
6. Configure mmm_mon.conf (only on the MMM management node):
vim /etc/mysql-mmm/mmm_mon.conf

include mmm_common.conf

<monitor>
    ip              127.0.0.1   # for safety, listen only on localhost; default port 9988
    pid_path        /run/mysql-mmm-monitor.pid
    bin_path        /usr/libexec/mysql-mmm
    status_path     /var/lib/mysql-mmm/mmm_mond.status
    ping_ips        192.168.106.106, 192.168.106.107, 192.168.106.109, 192.168.106.110  # connectivity test; the network is considered up as long as one of them responds
</monitor>

<host default>
    monitor_user        mmm_monitor
    monitor_password    monitorpasswd
</host>

debug 0   # monitor run mode: 0 normal, 1 debug

Open port 9988 on the MMM management node:
firewall-cmd --permanent --add-port=9988/tcp
firewall-cmd --reload
Open port 9989 on all MySQL nodes:
firewall-cmd --permanent --add-port=9989/tcp
firewall-cmd --reload
7. Configure mmm_agent.conf on all MySQL nodes:
vim /etc/mysql-mmm/mmm_agent.conf
include mmm_common.conf
this db1    # on the four MySQL nodes set the matching name: db1, db2, db3, db4
8. Start the MMM services
Start the monitor on the MMM management node:
systemctl restart mysql-mmm-monitor
systemctl enable mysql-mmm-monitor
Start the agent on the MySQL nodes:
systemctl restart mysql-mmm-agent
systemctl enable mysql-mmm-agent
Basic management commands on the MMM management node:
mmm_control show 
 db1(192.168.106.106) master/AWAITING_RECOVERY. Roles: 
 db2(192.168.106.107) master/AWAITING_RECOVERY. Roles: 
 db3(192.168.106.109) slave/AWAITING_RECOVERY. Roles: 
 db4(192.168.106.110) slave/AWAITING_RECOVERY. Roles: 
If the nodes stay in the waiting state, you can set them online manually:
mmm_control set_online db1 
 mmm_control set_online db2 
 mmm_control set_online db3 
 mmm_control set_online db4 
 mmm_control show
db1(192.168.106.106) master/ONLINE. Roles: reader(192.168.106.204), writer(192.168.106.200) 
 db2(192.168.106.107) master/ONLINE. Roles: reader(192.168.106.201) 
 db3(192.168.106.109) slave/ONLINE. Roles: reader(192.168.106.203) 
 db4(192.168.106.110) slave/ONLINE. Roles: reader(192.168.106.202)
Check the running status of each node:
mmm_control checks all 
 db4 ping [last change: 2017/03/25 22:55:49] OK 
 db4 mysql [last change: 2017/03/25 22:55:49] OK 
 db4 rep_threads [last change: 2017/03/25 22:55:49] OK 
 db4 rep_backlog [last change: 2017/03/25 22:55:49] OK: Backlog is null 
 db2 ping [last change: 2017/03/25 22:55:49] OK 
 db2 mysql [last change: 2017/03/25 22:55:49] OK 
 db2 rep_threads [last change: 2017/03/25 22:55:49] OK 
 db2 rep_backlog [last change: 2017/03/25 22:55:49] OK: Backlog is null 
 db3 ping [last change: 2017/03/25 22:55:49] OK 
 db3 mysql [last change: 2017/03/25 22:55:49] OK 
 db3 rep_threads [last change: 2017/03/25 22:55:49] OK 
 db3 rep_backlog [last change: 2017/03/25 22:55:49] OK: Backlog is null 
 db1 ping [last change: 2017/03/25 22:55:49] OK 
 db1 mysql [last change: 2017/03/25 22:55:49] OK 
 db1 rep_threads [last change: 2017/03/25 22:55:49] OK 
 db1 rep_backlog [last change: 2017/03/25 22:55:49] OK: Backlog is null
Check the VIP bindings on each MySQL node:
ip a 
9. Testing
1. Read/write-splitting test
Create a test user:
mysql -uroot -p
create database test;
create user test@'192.168.1.%' identified by '123';
grant all on test.* to test@'192.168.1.%';
exit

Log in through the write VIP (create a table and insert data as a test):
mysql -utest -p -h192.168.1.160
use test;
create table mmm_test(id varchar(60));
insert into mmm_test (id) values ("masetr");
exit
Log in through a read VIP:
mysql -utest -p -h192.168.106.201
select * from test.mmm_test;
+----------------+
| Tables_in_test |
+----------------+
| mmm_test       |
+----------------+

mysql -utest -p -h192.168.106.202
select * from test.mmm_test;
2. Failover test
 [root@monitor ~]# mmm_control show 
 db1(192.168.106.106) master/ONLINE. Roles: reader(192.168.106.202), writer(192.168.1.160) 
 db2(192.168.106.107) master/ONLINE. Roles: reader(192.168.106.201) 
 db3(192.168.106.109) slave/ONLINE. Roles: reader(192.168.106.204) 
 db4(192.168.106.110) slave/ONLINE. Roles: reader(192.168.106.203) 
 [root@DB1 ~]# systemctl stop mariadb 
 [root@www ~]# mmm_control show 
 db1(192.168.106.106) master/HARD_OFFLINE. Roles: 
 db2(192.168.106.107) master/ONLINE. Roles: reader(192.168.106.201) 
 db3(192.168.106.109) slave/ONLINE. Roles: reader(192.168.106.204) 
 db4(192.168.106.110) slave/ONLINE. Roles: reader(192.168.106.203) 
 [root@monitor ~]# mmm_control show 
 db1(192.168.106.106) master/HARD_OFFLINE. Roles: 
 db2(192.168.106.107) master/ONLINE. Roles: reader(192.168.106.201), writer(192.168.1.160) 
 db3(192.168.106.109) slave/ONLINE. Roles: reader(192.168.106.202), reader(192.168.106.204) 
 db4(192.168.106.110) slave/ONLINE. Roles: reader(192.168.106.203) 
 [root@DB1 ~]# systemctl restart mariadb
Note that although DB1 is back up, the write VIP stays on DB2; a way to move it back manually is sketched after the output below.
[root@monitor ~]# mmm_control show 
 db1(192.168.106.106) master/ONLINE. Roles: reader(192.168.106.202) 
 db2(192.168.106.107) master/ONLINE. Roles: reader(192.168.106.201), writer(192.168.1.160) 
 db3(192.168.106.109) slave/ONLINE. Roles: reader(192.168.106.204) 
 db4(192.168.106.110) slave/ONLINE. Roles: reader(192.168.106.203) 
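If you do want to hand the writer role back to DB1 manually, mmm_control provides a move_role command for that (illustrative; run it on the monitor node):

[root@monitor ~]# mmm_control move_role writer db1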
IV. Improving the MMM Architecture with Amoeba (server IP 192.168.1.199)
1. Install Java, the runtime environment Amoeba needs
Create the installation directory:
mkdir /usr/java
cd /usr/java
Official download URL: http://download.oracle.com/otn-pub/java/jdk/8u92-b14/jdk-8u92-linux-x64.rpm
wget -c --no-check-certificate --no-cookie --header "Cookie: s_nr=1420682671945; s_cc=true; oraclelicense=accept-securebackup-cookie; gpw_e24=http%3A%2F%2Fwww.oracle.com%2Ftechnetwork%2Fjava%2Fjavase%2Fdownloads%2Fjdk7-downloads-1880260.html;s_sq=%5B%5BB%5D%5D" http://download.oracle.com/otn-pub/java/jdk/8u92-b14/jdk-8u92-linux-x64.rpm
 chmod +x jdk-8u92-linux-x64.rpm 
 rpm -ivh jdk-8u92-linux-x64.rpm 
 vim /etc/profile
Append the following:
export JAVA_HOME=/usr/java/jdk1.8.0_92
export CLASSPATH=.:$JAVA_HOME/jre/lib/rt.jar:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=$PATH:$JAVA_HOME/bin
Apply it immediately:
source /etc/profile
Check the version:
java -version
java version "1.8.0_92"
Java(TM) SE Runtime Environment (build 1.8.0_92-b14)
Java HotSpot(TM) 64-Bit Server VM (build 25.92-b14, mixed mode)
2. Install Amoeba
 wget https://sourceforge.net/projects/amoeba/files/Amoeba%20for%20mysql/3.x/amoeba-mysql-3.0.5-RC-distribution.zip
unzip amoeba-mysql-3.0.5-RC-distribution.zip
mv amoeba-mysql-3.0.5-RC /usr/local/amoeba/ 
3. Configure Amoeba
vim /usr/local/amoeba/conf/dbServers.xml
Modify the settings shown in bold.

Grant all privileges to the amoeba account (used by the test):
grant all on *.* to amoeba@'192.168.1.199';
flush privileges;
exit
5. Open the Amoeba port in the firewall:
firewall-cmd --permanent --add-port=8066/tcp
firewall-cmd --reload
6. Start Amoeba:
/usr/local/amoeba/bin/launcher &
netstat -tlunp | grep java
tcp6 0 0 :::8066 :::* LISTEN 2666/java
vim /usr/local/amoeba/jvm.properties
Change the first JVM_OPTIONS line below into the second:
JVM_OPTIONS="-server -Xms256m -Xmx1024m -Xss196k -XX:PermSize=16m -XX:MaxPermSize=96m"
JVM_OPTIONS="-server -Xms1024m -Xmx1024m -Xss256k -XX:PermSize=16m -XX:MaxPermSize=96m"
7. Verification
Perform the following in MySQL.
Run directly on the slave1 and slave2 slaves (so that the slaves hold one extra row):
insert into test.mmm_test (id) values ("slave");
Begin the verification:
mysql -uroot -p1234567890 -h192.168.1.199 -P8066
select * from test.mmm_test;
MySQL [(none)]> select * from test.mmm_test;
+--------+
| id     |
+--------+
| masetr |
| slave  |
+--------+
2 rows in set (0.07 sec)

MySQL [(none)]> select * from test.mmm_test;
+--------+
| id     |
+--------+
| masetr |
| slave  |
+--------+
2 rows in set (0.03 sec)

MySQL [(none)]> select * from test.mmm_test;
+--------+
| id     |
+--------+
| masetr |
+--------+
1 row in set (0.04 sec)

MySQL [(none)]> select * from test.mmm_test;
+--------+
| id     |
+--------+
| masetr |
+--------+
1 row in set (0.04 sec)

Repeated queries return either two rows or one row depending on whether Amoeba routed the read to a slave (which has the extra "slave" row) or to the master (which only has "masetr"), confirming that reads are balanced across the back ends.
V. A Complete High-Availability MySQL Read/Write-Splitting Cluster Architecture
(My machine supports at most six virtual machines; any more and it falls over, so this part is not tested.)
總結
以上是生活随笔為你收集整理的MySQL MMM多主多从搭建的全部內容,希望文章能夠幫你解決所遇到的問題。
 
                            
                        - 上一篇: 压面机压面条配方比例一斤面加多少?
- 下一篇: 分布式文档系统-document id的
