MongoDB replica set deployment
Deploying a replica set
A replica set made up of three members provides enough redundancy to survive network failures or other system failures, and it also gives enough capacity to distribute read operations. A replica set should keep an odd number of members so that elections can always complete normally.
Here we use three existing mongod instances to deploy a three-member replica set:
192.168.1.3 hadoop1.abc.com hadoop1
192.168.1.4 hadoop2.abc.com hadoop2
192.168.1.5 hadoop3.abc.com hadoop3
Notes on deploying a replica set
Architecture
In production, deploy each member on its own machine and use the standard MongoDB port, 27017. Use the bind_ip option to restrict the addresses on which mongod accepts connections from applications.
If the replica set is geographically distributed, make sure a majority of the mongod members are located in the main data center.
Connectivity
Make sure the members can communicate with one another and that all clients sit on a secure, trusted network. Consider the following:
- Establish a virtual private network. Ensure that your network topology routes all traffic between members within a single site over the local area network.
- Configure access restrictions so that unknown clients cannot connect to the replica set.
- Configure network settings and firewall rules so that the MongoDB port is open only to your applications, allowing traffic to flow between the applications and MongoDB.
Finally, make sure every member of the replica set can resolve the others by DNS or hostname; configure DNS or the /etc/hosts file accordingly.
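In this lab, name resolution is handled through /etc/hosts on every node; a sketch of appending the entries listed above (run on each host):

[root@hadoop1 ~]# cat >> /etc/hosts <<EOF
192.168.1.3 hadoop1.abc.com hadoop1
192.168.1.4 hadoop2.abc.com hadoop2
192.168.1.5 hadoop3.abc.com hadoop3
EOF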
For this lab the firewall is turned off and SELinux is switched to permissive mode with setenforce 0.
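On CentOS 6 that amounts to something like the following (a sketch only; adjust to your own security policy):

[root@hadoop1 ~]# service iptables stop   # stop the firewall for the duration of the lab
[root@hadoop1 ~]# setenforce 0            # put SELinux into permissive mode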
The system environment is as follows:
[root@hadoop2 data]# cat /etc/issue
CentOS release 6.5 (Final)
Kernel \r on an \m
[root@hadoop2 data]# uname -r
2.6.32-431.el6.x86_64
Configuration file options (/etc/mongod.conf):

port = 27017
bind_ip =
dbpath =
fork = true
replSet = testrs0
rest = true
Detailed steps
1. Create the data directory on every node.
[root@hadoop1 ~]# mkdir -pv /mongodb/data/
[root@hadoop1 ~]# chown mongod.mongod /mongodb/data/
2. Start every member of the replica set with the appropriate configuration options.
On each node, start mongod and specify the replica set name with the replSet option, along with any other required options:
[root@hadoop1 ~]# vim /etc/mongod.conf
# add the following
#Replica Set
replSet = testrs0

or start it directly from the command line:

[root@hadoop1 ~]# mongod --replSet "testrs0"
3. Make sure every node uses the same replica set name:
[root@hadoop1 ~]# scp /etc/mongod.conf root@hadoop2:/etc/; scp /etc/mongod.conf root@hadoop3:/etc/
Note: if starting mongod fails with an "addr already in use" error, the port is already occupied by another process:
[root@hadoop1 data]# mongod
2015-07-29T19:15:51.728+0800 E NETWORK  [initandlisten] listen(): bind() failed errno:98 Address already in use for socket: 0.0.0.0:27017
2015-07-29T19:15:51.728+0800 E NETWORK  [initandlisten]   addr already in use
2015-07-29T19:15:51.729+0800 I STORAGE  [initandlisten] exception in initAndListen: 29 Data directory /data/db not found., terminating
2015-07-29T19:15:51.729+0800 I CONTROL  [initandlisten] dbexit:  rc: 100
Find the process holding the port and kill it:
[root@hadoop1 ~]# netstat -anp | more
unix  2      [ ACC ]     STREAM     LISTENING     15588  2174/mongod         /tmp/mongodb-27017.sock
[root@hadoop1 ~]# kill 2174
[root@hadoop1 ~]# /etc/init.d/mongod start
Starting mongod:                                           [OK]
[root@hadoop1 ~]# mongo
4. Initialize the replica set.
With the rs.initiate() command, MongoDB initializes a replica set consisting of the current member with a default configuration:
> rs.initiate()
{
    "info2" : "no configuration explicitly specified -- making one",
    "me" : "hadoop1.abc.com:27017",
    "info" : "try querying local.system.replset to see current configuration",
    "ok" : 0,
    "errmsg" : "already initialized",
    "code" : 23
}
> rs.status()
{
    "state" : 10,
    "stateStr" : "REMOVED",
    "uptime" : 38,
    "optime" : Timestamp(1438168698, 1),
    "optimeDate" : ISODate("2015-07-29T11:18:18Z"),
    "ok" : 0,
    "errmsg" : "Our replica set config is invalid or we are not a member of it",
    "code" : 93
}

Check the log file:
2015-07-29T20:00:45.433+0800 W NETWORK  [ReplicationExecutor] Failed to connect to 192.168.1.3:27017, reason: errno:111 Connection refused
2015-07-29T20:00:45.433+0800 W REPL     [ReplicationExecutor] Locally stored replica set configuration does not have a valid entry for the current node; waiting for reconfig or remote heartbeat; Got "NodeNotFound No host described in new configuration 1 for replica set testrs0 maps to this node" while validating { _id: "testrs0", version: 1, members: [ { _id: 0, host: "hadoop1.abc.com:27017", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatTimeoutSecs: 10, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 } } }
2015-07-29T20:00:45.433+0800 I REPL     [ReplicationExecutor] New replica set config in use: { _id: "testrs0", version: 1, members: [ { _id: 0, host: "hadoop1.abc.com:27017", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatTimeoutSecs: 10, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 } } }
2015-07-29T20:00:45.433+0800 I REPL     [ReplicationExecutor] This node is not a member of the config
2015-07-29T20:00:45.433+0800 I REPL     [ReplicationExecutor] transition to REMOVED
2015-07-29T20:00:45.433+0800 I REPL     [ReplicationExecutor] Starting replication applier threads
2015-07-29T20:00:49.067+0800 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:58852 #1 (1 connection now open)
2015-07-29T20:01:17.436+0800 I COMMAND  [conn1] replSet info initiate : no configuration specified. Using a default configuration for the set
2015-07-29T20:01:17.436+0800 I COMMAND  [conn1] replSet created this configuration for initiation : { _id: "testrs0", version: 1, members: [ { _id: 0, host: "hadoop1.abc.com:27017" } ] }
2015-07-29T20:01:17.436+0800 I REPL     [conn1] replSetInitiate admin command received from client
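If the configuration that rs.initiate() generates (or a stale local configuration, as in the NodeNotFound warning above) does not match the member you intend to use, you can pass an explicit configuration document instead. A minimal sketch using the hostname from this lab:

> rs.initiate({
      _id: "testrs0",                              // must match the replSet value in mongod.conf
      members: [ { _id: 0, host: "hadoop1.abc.com:27017" } ]
  })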
5. Add the remaining nodes to the replica set.
Use rs.add() to add the remaining nodes:
testrs0:PRIMARY> rs.add("192.168.1.4:27017")
{ "ok" : 1 }
testrs0:PRIMARY> rs.status()
{ "set" : "testrs0", "date" : ISODate("2015-07-30T02:09:45.871Z"), "myState" : 1, "members" : [ { "_id" : 0, "name" : "hadoop1.abc.com:27017", "health" : 1, "state" : 1, "stateStr" : "PRIMARY", "uptime" : 50410, "optime" : Timestamp(1438222179, 1), "optimeDate" : ISODate("2015-07-30T02:09:39Z"), "electionTime" : Timestamp(1438171776, 1), "electionDate" : ISODate("2015-07-29T12:09:36Z"), "configVersion" : 2, "self" : true }, { "_id" : 1, "name" : "192.168.1.4:27017", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 6, "optime" : Timestamp(1438222179, 1), "optimeDate" : ISODate("2015-07-30T02:09:39Z"), "lastHeartbeat" : ISODate("2015-07-30T02:09:45.081Z"), "lastHeartbeatRecv" : ISODate("2015-07-30T02:09:45.183Z"), "pingMs" : 1, "configVersion" : 2 } ], "ok" : 1 }
testrs0:PRIMARY> rs.add("192.168.1.5:27017")
{ "ok" : 1 }
testrs0:PRIMARY> rs.status()
{ "set" : "testrs0", "date" : ISODate("2015-07-30T02:28:52.382Z"), "myState" : 1, "members" : [ { "_id" : 0, "name" : "hadoop1.abc.com:27017", "health" : 1, "state" : 1, "stateStr" : "PRIMARY", "uptime" : 51557, "optime" : Timestamp(1438223187, 1), "optimeDate" : ISODate("2015-07-30T02:26:27Z"), "electionTime" : Timestamp(1438171776, 1), "electionDate" : ISODate("2015-07-29T12:09:36Z"), "configVersion" : 3, "self" : true }, { "_id" : 1, "name" : "192.168.1.4:27017", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 1153, "optime" : Timestamp(1438223187, 1), "optimeDate" : ISODate("2015-07-30T02:26:27Z"), "lastHeartbeat" : ISODate("2015-07-30T02:28:52.337Z"), "lastHeartbeatRecv" : ISODate("2015-07-30T02:28:50.438Z"), "pingMs" : 0, "syncingTo" : "hadoop1.abc.com:27017", "configVersion" : 3 }, { "_id" : 2, "name" : "192.168.1.5:27017", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 13, "optime" : Timestamp(1438223187, 1), "optimeDate" : ISODate("2015-07-30T02:26:27Z"), "lastHeartbeat" : ISODate("2015-07-30T02:28:50.437Z"), "lastHeartbeatRecv" : ISODate("2015-07-30T02:28:50.478Z"), "pingMs" : 1, "configVersion" : 3 } ], "ok" : 1 }
testrs0:PRIMARY> rs.isMaster()
{ "setName" : "testrs0", "setVersion" : 3, "ismaster" : true, "secondary" : false, "hosts" : [ "hadoop1.abc.com:27017", "192.168.1.4:27017", "192.168.1.5:27017" ], "primary" : "hadoop1.abc.com:27017", "me" : "hadoop1.abc.com:27017", "electionId" : ObjectId("55b8c280790a6c1f967f6147"), "maxBsonObjectSize" : 16777216, "maxMessageSizeBytes" : 48000000, "maxWriteBatchSize" : 1000, "localTime" : ISODate("2015-07-30T02:30:18Z"), "maxWireVersion" : 3, "minWireVersion" : 0, "ok" : 1 }

Verify on another member, hadoop2:
testrs0:SECONDARY> rs.isMaster()
{ "setName" : "testrs0", "setVersion" : 3, "ismaster" : false, "secondary" : true, "hosts" : [ "hadoop1.abc.com:27017", "192.168.1.4:27017", "192.168.1.5:27017" ], "primary" : "hadoop1.abc.com:27017", "me" : "192.168.1.4:27017", "maxBsonObjectSize" : 16777216, "maxMessageSizeBytes" : 48000000, "maxWriteBatchSize" : 1000, "localTime" : ISODate("2015-07-30T02:32:43.546Z"), "maxWireVersion" : 3, "minWireVersion" : 0, "ok" : 1 }

6. Create data on the primary and check whether the secondaries receive it.
testrs0:PRIMARY> use testdb
switched to db testdb
testrs0:PRIMARY> db.testcoll.insert({Name: "test", Age: 50, Gender: "F"})
WriteResult({ "nInserted" : 1 })
testrs0:PRIMARY> db.testcoll.find()
{ "_id" : ObjectId("55b9945b92ad0ab98483695e"), "Name" : "test", "Age" : 60, "Gender" : "F" }
{ "_id" : ObjectId("55b994ce92ad0ab98483695f"), "Name" : "test", "Age" : 50, "Gender" : "F" }

By default you cannot query a secondary directly; first run rs.slaveOk() on that member to allow read operations there:
testrs0:SECONDARY> rs.slaveOk()
testrs0:SECONDARY> use testdb;
switched to db testdb
testrs0:SECONDARY> db.testcoll.find()
{ "_id" : ObjectId("55b9945b92ad0ab98483695e"), "Name" : "test", "Age" : 60, "Gender" : "F" }
{ "_id" : ObjectId("55b994ce92ad0ab98483695f"), "Name" : "test", "Age" : 50, "Gender" : "F" }
7. Take the primary, hadoop1, down.
[root@hadoop1 data]# service mongod stop
Stopping mongod:

Verify on hadoop3:
testrs0:PRIMARY> rs.status()
{ "set" : "testrs0", "date" : ISODate("2015-07-30T04:36:19.677Z"), "myState" : 1, "members" : [ { "_id" : 0, "name" : "hadoop1.abc.com:27017", "health" : 0, "state" : 8, "stateStr" : "(not reachable/healthy)", "uptime" : 0, "optime" : Timestamp(0, 0), "optimeDate" : ISODate("1970-01-01T00:00:00Z"), "lastHeartbeat" : ISODate("2015-07-30T04:36:19.503Z"), "lastHeartbeatRecv" : ISODate("2015-07-30T04:33:18.147Z"), "pingMs" : 0, "lastHeartbeatMessage" : "Failed attempt to connect to hadoop1.abc.com:27017; couldn't connect to server hadoop1.abc.com:27017 (192.168.1.3), connection attempt failed", "configVersion" : -1 }, { "_id" : 1, "name" : "192.168.1.4:27017", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 7661, "optime" : Timestamp(1438225614, 1), "optimeDate" : ISODate("2015-07-30T03:06:54Z"), "lastHeartbeat" : ISODate("2015-07-30T04:36:19.335Z"), "lastHeartbeatRecv" : ISODate("2015-07-30T04:36:19.348Z"), "pingMs" : 0, "configVersion" : 3 }, { "_id" : 2, "name" : "192.168.1.5:27017", "health" : 1, "state" : 1, "stateStr" : "PRIMARY", "uptime" : 7664, "optime" : Timestamp(1438225614, 1), "optimeDate" : ISODate("2015-07-30T03:06:54Z"), "electionTime" : Timestamp(1438230801, 1), "electionDate" : ISODate("2015-07-30T04:33:21Z"), "configVersion" : 3, "self" : true } ], "ok" : 1 }
testrs0:PRIMARY> db.isMaster()
{ "setName" : "testrs0", "setVersion" : 3, "ismaster" : true, "secondary" : false, "hosts" : [ "hadoop1.abc.com:27017", "192.168.1.4:27017", "192.168.1.5:27017" ], "primary" : "192.168.1.5:27017", "me" : "192.168.1.5:27017", "electionId" : ObjectId("55b9a91100e446910c89a0a3"), "maxBsonObjectSize" : 16777216, "maxMessageSizeBytes" : 48000000, "maxWriteBatchSize" : 1000, "localTime" : ISODate("2015-07-30T04:37:33.090Z"), "maxWireVersion" : 3, "minWireVersion" : 0, "ok" : 1 }
testrs0:PRIMARY> db.testcoll.insert({Name: "tom", Age: 45, Gender: "G"})
WriteResult({ "nInserted" : 1 })
Go back to hadoop2 and check the data:
testrs0:SECONDARY> db.testcoll.find()
{ "_id" : ObjectId("55b9942d92ad0ab98483695c"), "Name" : "test", "Age" : 60, "Gender" : "F" }
{ "_id" : ObjectId("55b9944892ad0ab98483695d"), "Name" : "test", "Age" : 60, "Gender" : "F" }
{ "_id" : ObjectId("55b9945b92ad0ab98483695e"), "Name" : "test", "Age" : 60, "Gender" : "F" }
{ "_id" : ObjectId("55b994ce92ad0ab98483695f"), "Name" : "test", "Age" : 50, "Gender" : "F" }
{ "_id" : ObjectId("55b9aa714b575261aff42f25"), "Name" : "tom", "Age" : 45, "Gender" : "G" }
Bring hadoop1 back online:
[root@hadoop1 data]# service mongod start
Starting mongod:
Check the status again: hadoop1 cannot take the primary role back unless it is given a higher priority. In this scenario it automatically rejoins as a secondary.
testrs0:SECONDARY> rs.status()
{ "set" : "testrs0", "date" : ISODate("2015-07-30T04:44:16.534Z"), "myState" : 2, "syncingTo" : "192.168.1.4:27017", "members" : [ { "_id" : 0, "name" : "hadoop1.abc.com:27017", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 165, "optime" : Timestamp(1438231153, 1), "optimeDate" : ISODate("2015-07-30T04:39:13Z"), "syncingTo" : "192.168.1.4:27017", "configVersion" : 3, "self" : true }, { "_id" : 1, "name" : "192.168.1.4:27017", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 164, "optime" : Timestamp(1438231153, 1), "optimeDate" : ISODate("2015-07-30T04:39:13Z"), "lastHeartbeat" : ISODate("2015-07-30T04:44:16.199Z"), "lastHeartbeatRecv" : ISODate("2015-07-30T04:44:15.824Z"), "pingMs" : 0, "syncingTo" : "192.168.1.5:27017", "configVersion" : 3 }, { "_id" : 2, "name" : "192.168.1.5:27017", "health" : 1, "state" : 1, "stateStr" : "PRIMARY", "uptime" : 164, "optime" : Timestamp(1438231153, 1), "optimeDate" : ISODate("2015-07-30T04:39:13Z"), "lastHeartbeat" : ISODate("2015-07-30T04:44:16.185Z"), "lastHeartbeatRecv" : ISODate("2015-07-30T04:44:14.902Z"), "pingMs" : 0, "electionTime" : Timestamp(1438230801, 1), "electionDate" : ISODate("2015-07-30T04:33:21Z"), "configVersion" : 3 } ], "ok" : 1 }
testrs0:SECONDARY> rs.slaveOk()
testrs0:SECONDARY> db.testcoll.find()
testrs0:SECONDARY> use testdb
switched to db testdb
testrs0:SECONDARY> db.testcoll.find()
{ "_id" : ObjectId("55b9942d92ad0ab98483695c"), "Name" : "test", "Age" : 60, "Gender" : "F" }
{ "_id" : ObjectId("55b9944892ad0ab98483695d"), "Name" : "test", "Age" : 60, "Gender" : "F" }
{ "_id" : ObjectId("55b9945b92ad0ab98483695e"), "Name" : "test", "Age" : 60, "Gender" : "F" }
{ "_id" : ObjectId("55b994ce92ad0ab98483695f"), "Name" : "test", "Age" : 50, "Gender" : "F" }
{ "_id" : ObjectId("55b9aa714b575261aff42f25"), "Name" : "tom", "Age" : 45, "Gender" : "G" }
8. Define priorities.
Use rs.conf() to view the replica set configuration object:
testrs0:SECONDARY> rs.conf()
{ "_id" : "testrs0", "version" : 3, "members" : [ { "_id" : 0, "host" : "hadoop1.abc.com:27017", "arbiterOnly" : false, "buildIndexes" : true, "hidden" : false, "priority" : 1, "tags" : {}, "slaveDelay" : 0, "votes" : 1 }, { "_id" : 1, "host" : "192.168.1.4:27017", "arbiterOnly" : false, "buildIndexes" : true, "hidden" : false, "priority" : 1, "tags" : {}, "slaveDelay" : 0, "votes" : 1 }, { "_id" : 2, "host" : "192.168.1.5:27017", "arbiterOnly" : false, "buildIndexes" : true, "hidden" : false, "priority" : 1, "tags" : {}, "slaveDelay" : 0, "votes" : 1 } ], "settings" : { "chainingAllowed" : true, "heartbeatTimeoutSecs" : 10, "getLastErrorModes" : {}, "getLastErrorDefaults" : { "w" : 1, "wtimeout" : 0 } } }
On the current primary (hadoop3), set hadoop1's priority to 2 so that it becomes the primary.
Assign the replica set configuration object to a variable (e.g. mycfg), set the member's priority through that variable, and then apply the new configuration with rs.reconfig().
Note: the priority is set with mycfg.members[<index of the member in the array>].priority = 2.
testrs0:PRIMARY> rs.conf()
{ "_id" : "testrs0", "version" : 3, "members" : [ { "_id" : 0, "host" : "hadoop1.abc.com:27017", "arbiterOnly" : false, "buildIndexes" : true, "hidden" : false, "priority" : 1, "tags" : {}, "slaveDelay" : 0, "votes" : 1 }, { "_id" : 1, "host" : "192.168.1.4:27017", "arbiterOnly" : false, "buildIndexes" : true, "hidden" : false, "priority" : 1, "tags" : {}, "slaveDelay" : 0, "votes" : 1 }, { "_id" : 2, "host" : "192.168.1.5:27017", "arbiterOnly" : false, "buildIndexes" : true, "hidden" : false, "priority" : 1, "tags" : {}, "slaveDelay" : 0, "votes" : 1 } ], "settings" : { "chainingAllowed" : true, "heartbeatTimeoutSecs" : 10, "getLastErrorModes" : {}, "getLastErrorDefaults" : { "w" : 1, "wtimeout" : 0 } } }
testrs0:PRIMARY> mycfg = rs.conf()
{ "_id" : "testrs0", "version" : 3, "members" : [ { "_id" : 0, "host" : "hadoop1.abc.com:27017", "arbiterOnly" : false, "buildIndexes" : true, "hidden" : false, "priority" : 1, "tags" : {}, "slaveDelay" : 0, "votes" : 1 }, { "_id" : 1, "host" : "192.168.1.4:27017", "arbiterOnly" : false, "buildIndexes" : true, "hidden" : false, "priority" : 1, "tags" : {}, "slaveDelay" : 0, "votes" : 1 }, { "_id" : 2, "host" : "192.168.1.5:27017", "arbiterOnly" : false, "buildIndexes" : true, "hidden" : false, "priority" : 1, "tags" : {}, "slaveDelay" : 0, "votes" : 1 } ], "settings" : { "chainingAllowed" : true, "heartbeatTimeoutSecs" : 10, "getLastErrorModes" : {}, "getLastErrorDefaults" : { "w" : 1, "wtimeout" : 0 } } }
testrs0:PRIMARY> mycfg.members[0].priority = 2
2
testrs0:PRIMARY> rs.reconfig(mycfg)
{ "ok" : 1 }
testrs0:PRIMARY>
2015-07-30T14:34:44.437+0800 I NETWORK  DBClientCursor::init call() failed
2015-07-30T14:34:44.439+0800 I NETWORK  trying reconnect to 127.0.0.1:27017 (127.0.0.1) failed
2015-07-30T14:34:44.452+0800 I NETWORK  reconnect 127.0.0.1:27017 (127.0.0.1) ok
testrs0:SECONDARY>
Check on hadoop1, then stop the hadoop1 service:
testrs0:PRIMARY> rs.status()
{ "set" : "testrs0", "date" : ISODate("2015-07-30T06:51:11.952Z"), "myState" : 1, "members" : [ { "_id" : 0, "name" : "hadoop1.abc.com:27017", "health" : 1, "state" : 1, "stateStr" : "PRIMARY", "uptime" : 7780, "optime" : Timestamp(1438238074, 1), "optimeDate" : ISODate("2015-07-30T06:34:34Z"), "electionTime" : Timestamp(1438238079, 1), "electionDate" : ISODate("2015-07-30T06:34:39Z"), "configVersion" : 4, "self" : true }, { "_id" : 1, "name" : "192.168.1.4:27017", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 7780, "optime" : Timestamp(1438238074, 1), "optimeDate" : ISODate("2015-07-30T06:34:34Z"), "lastHeartbeat" : ISODate("2015-07-30T06:51:11.072Z"), "lastHeartbeatRecv" : ISODate("2015-07-30T06:51:11.375Z"), "pingMs" : 0, "configVersion" : 4 }, { "_id" : 2, "name" : "192.168.1.5:27017", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 7780, "optime" : Timestamp(1438238074, 1), "optimeDate" : ISODate("2015-07-30T06:34:34Z"), "lastHeartbeat" : ISODate("2015-07-30T06:51:11.779Z"), "lastHeartbeatRecv" : ISODate("2015-07-30T06:51:10.299Z"), "pingMs" : 0, "configVersion" : 4 } ], "ok" : 1 }
testrs0:PRIMARY> quit()
You have new mail in /var/spool/mail/root
[root@hadoop1 ~]# service mongo stop
mongo: unrecognized service
[root@hadoop1 ~]# service mongod stop
Stopping mongod:
Check on hadoop3: the remaining two members still elect a primary automatically.
testrs0:PRIMARY> rs.status()
{ "set" : "testrs0", "date" : ISODate("2015-07-30T06:55:18.238Z"), "myState" : 1, "members" : [ { "_id" : 0, "name" : "hadoop1.abc.com:27017", "health" : 0, "state" : 8, "stateStr" : "(not reachable/healthy)", "uptime" : 0, "optime" : Timestamp(0, 0), "optimeDate" : ISODate("1970-01-01T00:00:00Z"), "lastHeartbeat" : ISODate("2015-07-30T06:55:16.275Z"), "lastHeartbeatRecv" : ISODate("2015-07-30T06:51:44.879Z"), "pingMs" : 5, "lastHeartbeatMessage" : "Failed attempt to connect to hadoop1.abc.com:27017; couldn't connect to server hadoop1.abc.com:27017 (192.168.1.3), connection attempt failed", "configVersion" : -1 }, { "_id" : 1, "name" : "192.168.1.4:27017", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 16000, "optime" : Timestamp(1438238074, 1), "optimeDate" : ISODate("2015-07-30T06:34:34Z"), "lastHeartbeat" : ISODate("2015-07-30T06:55:17.988Z"), "lastHeartbeatRecv" : ISODate("2015-07-30T06:55:17.988Z"), "pingMs" : 1, "lastHeartbeatMessage" : "could not find member to sync from", "configVersion" : 4 }, { "_id" : 2, "name" : "192.168.1.5:27017", "health" : 1, "state" : 1, "stateStr" : "PRIMARY", "uptime" : 16003, "optime" : Timestamp(1438238074, 1), "optimeDate" : ISODate("2015-07-30T06:34:34Z"), "electionTime" : Timestamp(1438239108, 1), "electionDate" : ISODate("2015-07-30T06:51:48Z"), "configVersion" : 4, "self" : true } ], "ok" : 1 }
Start the hadoop1 service again; because its priority is now 2, it is elected primary:
testrs0:PRIMARY> rs.i
rs.initiate(   rs.isMaster(
testrs0:PRIMARY> rs.isMaster()
{ "setName" : "testrs0", "setVersion" : 4, "ismaster" : true, "secondary" : false, "hosts" : [ "hadoop1.abc.com:27017", "192.168.1.4:27017", "192.168.1.5:27017" ], "primary" : "hadoop1.abc.com:27017", "me" : "hadoop1.abc.com:27017", "electionId" : ObjectId("55b9ca84ddeeac6a93355c18"), "maxBsonObjectSize" : 16777216, "maxMessageSizeBytes" : 48000000, "maxWriteBatchSize" : 1000, "localTime" : ISODate("2015-07-30T06:56:28.472Z"), "maxWireVersion" : 3, "minWireVersion" : 0, "ok" : 1 }
9. Triggering a re-election.
A member with priority 0 cannot be elected primary and cannot trigger an election; it only takes part in elections as a voter.
To add an arbiter, use the rs.addArb() command, as in the sketch below.
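A minimal sketch of adding an arbiter from the current primary, assuming a spare mongod instance reserved for the arbiter is already running on hadoop4.abc.net:27017 (the host mentioned in the geo-distributed notes below; the hostname here is illustrative):

testrs0:PRIMARY> // an arbiter votes in elections but stores no data and can never become primary
testrs0:PRIMARY> rs.addArb("hadoop4.abc.net:27017")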
10. A replica set on multiple ports.
If an application needs to connect to multiple replica sets, each replica set must have a distinct name.
1) Create the required data directories for each instance:
[root@hadoop1 ~]# mkdir -pv /srv/mongodb/rs0-0 /srv/mongodb/rs0-1 /srv/mongodb/rs0-2
mkdir: created directory "/srv/mongodb"
mkdir: created directory "/srv/mongodb/rs0-0"
mkdir: created directory "/srv/mongodb/rs0-1"
mkdir: created directory "/srv/mongodb/rs0-2"
You have new mail in /var/spool/mail/root

2) Start the mongod instances.
First instance:
[root@hadoop1 rs0-0]# mongod --port 27018 --dbpath /srv/mongodb/rs0-0 --replSet rs0 --smallfiles --oplogSize 128 &

Second instance:
[root@hadoop1 ~]# mongod --port 27019 --dbpath /srv/mongodb/rs0-1 --replSet rs0 --smallfiles --oplogSize 128 &

Third instance:
[root@hadoop1 ~]# mongod --port 27020 --dbpath /srv/mongodb/rs0-2 --replSet rs0 --smallfiles --oplogSize 128 &
Verify:
[root@hadoop1 ~]# netstat -tlnp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address               Foreign Address             State       PID/Program name
tcp        0      0 0.0.0.0:27019               0.0.0.0:*                   LISTEN      15718/mongod
tcp        0      0 0.0.0.0:27020               0.0.0.0:*                   LISTEN      15785/mongod
tcp        0      0 0.0.0.0:111                 0.0.0.0:*                   LISTEN      1081/rpcbind
tcp        0      0 0.0.0.0:28017               0.0.0.0:*                   LISTEN      14221/mongod
tcp        0      0 0.0.0.0:22                  0.0.0.0:*                   LISTEN      1285/sshd
tcp        0      0 127.0.0.1:631               0.0.0.0:*                   LISTEN      1157/cupsd
tcp        0      0 127.0.0.1:25                0.0.0.0:*                   LISTEN      1361/master
tcp        0      0 0.0.0.0:44378               0.0.0.0:*                   LISTEN      1099/rpc.statd
tcp        0      0 0.0.0.0:27017               0.0.0.0:*                   LISTEN      14221/mongod
tcp        0      0 0.0.0.0:27018               0.0.0.0:*                   LISTEN      15640/mongod
tcp        0      0 :::111                      :::*                        LISTEN      1081/rpcbind
tcp        0      0 :::22                       :::*                        LISTEN      1285/sshd
tcp        0      0 ::1:631                     :::*                        LISTEN      1157/cupsd
tcp        0      0 ::1:25                      :::*                        LISTEN      1361/master
tcp        0      0 :::48510                    :::*                        LISTEN      1099/rpc.statd
3) Connect to one of the mongod instances with the mongo shell by specifying its port:
[root@hadoop1 mongodb]# mongo --port 27018
MongoDB shell version: 3.0.5
connecting to: 127.0.0.1:27018/test
2015-07-30T16:26:01.442+0800 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:54185 #1 (1 connection now open)
Server has startup warnings:
2015-07-30T16:19:14.667+0800 I CONTROL  [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
2015-07-30T16:19:14.667+0800 I CONTROL  [initandlisten]
2015-07-30T16:19:14.668+0800 I CONTROL  [initandlisten]
2015-07-30T16:19:14.668+0800 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2015-07-30T16:19:14.668+0800 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2015-07-30T16:19:14.668+0800 I CONTROL  [initandlisten]
2015-07-30T16:19:14.668+0800 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2015-07-30T16:19:14.668+0800 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2015-07-30T16:19:14.668+0800 I CONTROL  [initandlisten]
>
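Once connected to the 27018 instance, the three local instances can be combined into the rs0 set by passing an explicit configuration to rs.initiate(). A sketch, assuming the ports started above and that hadoop1.abc.com resolves to this machine:

> rs.initiate({
      _id: "rs0",                                  // must match the --replSet name used above
      members: [
          { _id: 0, host: "hadoop1.abc.com:27018" },
          { _id: 1, host: "hadoop1.abc.com:27019" },
          { _id: 2, host: "hadoop1.abc.com:27020" }
      ]
  })

Afterwards the remaining two instances could equally have been added one at a time with rs.add(), as in step 5 above.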
Finally, some notes on deploying a geographically distributed replica set made up of four members.
A four-member, geographically distributed replica set has two additional considerations:
- One member (e.g. hadoop4.abc.net) must be an arbiter. That member can run on any machine serving the application, or on another machine that already runs MongoDB.
- We must decide how to distribute the members. There are three possible architectures:
  - Three members in Site A, one priority-0 member in Site B, plus a voting member in Site A.
  - Two members in Site A, two priority-0 members in Site B, plus a voting member in Site A.
  - Two members in Site A, one priority-0 member in Site B, one priority-0 member in Site C, plus a voting member in Site A.
In most cases the first architecture is preferred because it is the easiest to operate.
See also: the MongoDB replica set reference and the replica set database commands.
Reposted from: https://blog.51cto.com/zouqingyun/1679771
 
                            
