When deploying a replica set, we usually add one or more arbiter nodes. An arbiter does not hold a copy of the data; its only responsibility is to vote when the primary node is elected, so its hardware requirements are low. It can run on a separate server, on a monitoring server, or even on a virtual machine, with one caveat: an arbiter must never store replica data. Arbiter nodes can take part in elections, but the node actually elected is never a voting-only member; it is always one of the data-bearing nodes.
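The role split above can be sketched in a few lines. This is a hypothetical toy, not MongoDB's actual election protocol: arbiters count toward the vote total but are excluded from the candidate set, so only data-bearing members can win. The member names and `priority` field are illustrative assumptions.

```python
def elect_primary(members):
    """Toy election: arbiters vote but cannot be candidates;
    the winner needs a majority of all votes (arbiters included)."""
    candidates = [m for m in members if not m["arbiter"]]
    majority = len(members) // 2 + 1  # every member, arbiter or not, has one vote
    # Sketch: assume the highest-priority data-bearing member collects every vote.
    winner = max(candidates, key=lambda m: m["priority"])
    votes = len(members)
    return winner["name"] if votes >= majority else None

members = [
    {"name": "mongo1", "arbiter": False, "priority": 2},
    {"name": "mongo2", "arbiter": False, "priority": 1},
    {"name": "arbiter1", "arbiter": True, "priority": 0},
]
print(elect_primary(members))  # -> mongo1
```

Note how the arbiter matters here: with only two data-bearing members, its extra vote is what makes a majority of the three-member set reachable.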
This reminds me of ZooKeeper's cluster election scheme, which I looked at before; it works differently from MongoDB's.
ZooKeeper uses an algorithm called leader election. While the cluster is running there is only one Leader; all the other nodes are Followers. If the running Leader fails, the cluster uses the algorithm to elect a new Leader. For this to work, every node must be able to connect to every other node, which is why the host mapping described above must be configured.
When a ZooKeeper cluster starts up, it first elects a Leader. During the election, a node becomes the Leader once it has collected votes from a quorum of nodes, that is, more than half of the cluster.
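The quorum rule can be sketched as follows. This is a simplified model, not ZooKeeper's real Fast Leader Election (which also compares election epochs): here every server is assumed to converge on the peer with the most up-to-date transaction id (`zxid`), breaking ties by the larger server id (`myid`), and that peer becomes Leader once the votes reach a quorum.

```python
def elect_leader(servers):
    """Simplified ZooKeeper-style election: vote for the peer with the
    highest zxid (tie-break on myid); a quorum of votes is required."""
    best = max(servers, key=lambda s: (s["zxid"], s["myid"]))
    votes = len(servers)              # sketch: all servers converge on `best`
    quorum = len(servers) // 2 + 1    # more than half of the cluster
    return best["myid"] if votes >= quorum else None

servers = [
    {"myid": 1, "zxid": 0x100},
    {"myid": 2, "zxid": 0x102},
    {"myid": 3, "zxid": 0x102},
]
print(elect_leader(servers))  # -> 3 (highest zxid, then highest myid)
```

The quorum requirement is also why ZooKeeper clusters are usually sized with an odd number of nodes: a 4-node cluster tolerates no more failures than a 3-node one.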
memcached itself provides no distribution mechanism, so a proxy server can be used to deploy it as a distributed cluster. magent is one such memcached proxy. There is no leader or secondary role: all commands enter through the magent proxy, and when a node goes down and the requested data cannot be found, magent fetches it from a backup node. Alternatively, ZooKeeper can be used to manage a distributed memcached cluster.
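The routing-plus-fallback behaviour described above can be sketched like this. This is a hypothetical model of what a magent-style proxy does, not magent's actual code: plain dicts stand in for memcached nodes, keys are hashed to pick a primary node, writes are mirrored to a backup, and reads fall back to the backup when the primary is down.

```python
import hashlib

class ProxySketch:
    """Toy magent-style proxy: hash each key to a primary node,
    mirror writes to a backup, fall back to the backup on failure."""

    def __init__(self, primaries, backups):
        self.primaries = primaries   # dicts standing in for memcached nodes
        self.backups = backups
        self.down = set()            # ids of nodes simulated as failed

    def _pick(self, nodes, key):
        h = int(hashlib.md5(key.encode()).hexdigest(), 16)
        return nodes[h % len(nodes)]

    def set(self, key, value):
        self._pick(self.primaries, key)[key] = value
        self._pick(self.backups, key)[key] = value   # mirrored to backup

    def get(self, key):
        node = self._pick(self.primaries, key)
        if id(node) not in self.down and key in node:
            return node[key]
        return self._pick(self.backups, key).get(key)  # fallback path

proxy = ProxySketch(primaries=[{}, {}], backups=[{}])
proxy.set("user:1", "alice")
proxy.down.add(id(proxy._pick(proxy.primaries, "user:1")))  # node failure
print(proxy.get("user:1"))  # -> alice (served from the backup node)
```

The point of the sketch is that the clients never see the topology: they talk only to the proxy, which is exactly why magent needs no leader or secondary roles.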
When a Redis slave server is set up, the slave establishes a connection to the master and then sends the SYNC command. Whether it is the first connection or a reconnection after the link was dropped, the master starts a background process that saves a snapshot of the database to a file, while the master's main process begins collecting new write commands and caching them. When the background process finishes writing the file, the master sends the file to the slave; the slave saves it to disk and then loads it into memory to restore the snapshot. The master then forwards the cached commands to the slave, and every write command received afterwards is sent on to the slave over the connection the slave established. Commands synchronized from master to slave use the same protocol format as commands sent by clients.

If the connection between master and slave is broken, the slave can re-establish it automatically. If the master receives synchronization requests from several slaves at once, it starts a single background process to write the database image and then sends it to all of the slaves.

When the master fails, its data can be recovered from the persisted files, but after the failure every slave must reconnect and rebuild its entire dataset from a full snapshot, so this recovery can be very slow.
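The SYNC flow above can be condensed into a toy model. This is a sketch of the old Redis initial-replication sequence, not Redis itself: a `dict` stands in for the dataset and for the RDB file, and the hard-coded "write arriving during the snapshot" is an illustrative assumption showing why the master must buffer and replay commands.

```python
import copy

class MasterSketch:
    """Toy model of the old Redis SYNC flow: snapshot the dataset,
    buffer writes that arrive meanwhile, ship the snapshot, replay."""

    def __init__(self):
        self.data = {}
        self.slaves = []

    def write(self, key, value):
        self.data[key] = value
        for slave in self.slaves:
            slave.apply(("SET", key, value))  # stream subsequent writes

    def sync(self, slave):
        snapshot = copy.deepcopy(self.data)      # stands in for the RDB file
        buffered = [("SET", "during", "sync")]   # write arriving mid-snapshot
        for _, key, value in buffered:
            self.data[key] = value
        slave.load(snapshot)                     # slave restores the snapshot
        for cmd in buffered:                     # then replays buffered commands
            slave.apply(cmd)
        self.slaves.append(slave)

class SlaveSketch:
    def __init__(self):
        self.data = {}
    def load(self, snapshot):
        self.data = snapshot                     # load snapshot into memory
    def apply(self, cmd):
        _, key, value = cmd
        self.data[key] = value

master = MasterSketch()
master.write("a", "1")
slave = SlaveSketch()
master.sync(slave)
master.write("b", "2")
print(slave.data)  # -> {'a': '1', 'during': 'sync', 'b': '2'}
```

The snapshot-then-replay ordering is the crux: without the buffer, any write landing between the fork and the transfer would be lost on the slave.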