MongoDB's replication is simpler to configure than MySQL's, and in some ways it feels a bit more intelligent. The configuration is very simple; first, a brief description of the environment:
One Primary (10.10.1.55)
One Secondary (10.10.1.56)
One Arbiter (10.10.1.57)
Three machines in total, connected on the 10.10.1.0 subnet.
Add the following line to the mongo.conf configuration file on each of the three machines:
replSet = rs0
It does not have to be named rs0; any other name works too, as long as every machine uses the same replSet value.
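For reference, a minimal mongo.conf for each node might look like this (old INI-style format; port is the default, and dbpath/logpath here are assumptions for the sketch):

port = 27017
dbpath = /data/db                          # assumed data directory
logpath = /var/log/mongodb/mongod.log      # assumed log file
logappend = true
fork = true
replSet = rs0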
Log in to MongoDB with the mongo shell and create a cfg variable (a BSON document) as follows:
cfg = {
    "_id": "rs0",
    "members": [
        { "_id": 0, "host": "10.10.1.55:27017" },
        { "_id": 1, "host": "10.10.1.56:27017" }
    ]
}
10.10.1.55 is the Primary machine and 10.10.1.56 the Secondary; we leave the Arbiter out for now.
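As an aside, the Arbiter could also be declared up front rather than added later, using the arbiterOnly member flag; a sketch of that variant:

cfg = {
    "_id": "rs0",
    "members": [
        { "_id": 0, "host": "10.10.1.55:27017" },
        { "_id": 1, "host": "10.10.1.56:27017" },
        { "_id": 2, "host": "10.10.1.57:27017", "arbiterOnly": true }
    ]
}

In this walkthrough we add it afterwards with rs.addArb() instead.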
Next, enter the following commands (typing them on the Primary alone is enough):
rs.initiate(cfg)
rs.status()
After these two commands you will see that one machine is Primary and the other Secondary.
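You can also run rs.conf() to confirm both members were registered; trimmed output should look roughly like:

rs0:PRIMARY> rs.conf()
{
    "_id" : "rs0",
    "version" : 1,
    "members" : [
        { "_id" : 0, "host" : "10.10.1.55:27017" },
        { "_id" : 1, "host" : "10.10.1.56:27017" }
    ]
}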
Now add the Arbiter with the following command:
rs.addArb("10.10.1.57")
Run rs.status() again, and you should see output like the following:
rs0:PRIMARY> rs.status()
{
    "set" : "rs0",
    "date" : ISODate("2015-10-12T17:01:31Z"),
    "myState" : 1,
    "members" : [
        {
            "_id" : 0,
            "name" : "10.10.1.55:27017",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 66,
            "optime" : Timestamp(1444636144, 1),
            "optimeDate" : ISODate("2015-10-12T07:49:04Z"),
            "electionTime" : Timestamp(1444669275, 1),
            "electionDate" : ISODate("2015-10-12T17:01:15Z"),
            "self" : true
        },
        {
            "_id" : 1,
            "name" : "10.10.1.56:27017",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 21,
            "optime" : Timestamp(1444636144, 1),
            "optimeDate" : ISODate("2015-10-12T07:49:04Z"),
            "lastHeartbeat" : ISODate("2015-10-12T17:01:30Z"),
            "lastHeartbeatRecv" : ISODate("2015-10-12T17:01:30Z"),
            "pingMs" : 105,
            "lastHeartbeatMessage" : "syncing to: 10.10.1.55:27017",
            "syncingTo" : "10.10.1.55:27017"
        },
        {
            "_id" : 2,
            "name" : "10.10.1.57:27017",
            "health" : 1,
            "state" : 7,
            "stateStr" : "ARBITER",
            "uptime" : 2,
            "lastHeartbeat" : ISODate("2015-10-12T17:01:29Z"),
            "lastHeartbeatRecv" : ISODate("2015-10-12T17:01:30Z"),
            "pingMs" : 1002
        }
    ],
    "ok" : 1
}
That completes the MongoDB replication configuration. Next, let's see whether replication actually works.
On the Primary, enter:
rs0:PRIMARY> use students
switched to db students
rs0:PRIMARY> db.scores.insert({ "stuid": 1, "subject": "math", "score": 99 })
WriteResult({ "nInserted" : 1 })
rs0:PRIMARY> use local
switched to db local
rs0:PRIMARY> db.oplog.rs.find()
{ "ts" : Timestamp(1444636024, 1), "h" : NumberLong(0), "v" : 2, "op" : "n", "ns" : "", "o" : { "msg" : "initiating set" } }
{ "ts" : Timestamp(1444636144, 1), "h" : NumberLong("-5201159559565017071"), "v" : 2, "op" : "n", "ns" : "", "o" : { "msg" : "Reconfig set", "version" : 2 } }
{ "ts" : Timestamp(1444669489, 1), "h" : NumberLong("-3625785754623372300"), "v" : 2, "op" : "i", "ns" : "students.scores", "o" : { "_id" : ObjectId("561be83187dabed86388bff7"), "stuid" : 1, "subject" : "math", "score" : 99 } }
The last entry is the document we just inserted: the Secondary replays entries like this from the Primary's oplog. Now log in to the Secondary and check whether the database and the document made it over:
rs.slaveOk()
use students
db.scores.find()
The document we just inserted will be there.
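For reference, the expected output, whose ObjectId matches the oplog entry above:

rs0:SECONDARY> db.scores.find()
{ "_id" : ObjectId("561be83187dabed86388bff7"), "stuid" : 1, "subject" : "math", "score" : 99 }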
Now let's simulate the Primary machine going down. In the system shell, enter:
ps -ef | grep mongo
kill -9 mongodPID
This kills the mongod process.
Run rs.status() on the original Secondary and you will find it has become the Primary. Restart the killed Primary and it rejoins as the new Secondary. On the new Primary you can then run rs.stepDown() to trigger a fresh election so the original Primary can take over again, as sketched below.
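A minimal sketch of that failback, assuming 10.10.1.56 was the member elected Primary after the kill:

rs0:PRIMARY> rs.stepDown(60)    // on 10.10.1.56: step down and stay ineligible for 60 seconds
rs0:SECONDARY> rs.status()      // afterwards: 10.10.1.55, once restarted and caught up, can be elected again

Note that rs.stepDown() only forces an election; with default (equal) priorities, any eligible member may win it.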
What if the Secondary goes down? Will it catch up automatically after a restart?
Kill the Secondary's mongod process with kill as before, then insert a batch of records on the Primary:
rs0:PRIMARY> for (var i = 2; i < 1000; i++) db.scores.insert({ "stuid": i, "subject": "math", "score": 99 })
rs0:PRIMARY> db.scores.find().count()
999
999 documents in total. Now restart the Secondary's mongod and enter:
rs.slaveOk()
rs0:SECONDARY> db.scores.find().count()
999
The same count as on the Primary: after the restart, the Secondary replayed the operations it missed from the oplog and caught up automatically.
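To double-check that a restarted Secondary has caught up, one option (a sketch; output format varies by version) is to ask the Primary how far behind each member's oplog is:

rs0:PRIMARY> rs.printSlaveReplicationInfo()    // prints each secondary's syncedTo time and its lag behind the primary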