MongoDB 2.6 Deployment: Replica Sets + Sharding
     
  Add Date : 2018-11-21      
         
       
         
Deployment Planning

Operating system: Red Hat Enterprise Linux 6.4, 64-bit

Process (config file)               Port
Config server  (/etc/config.conf)   28000
Router, mongos (/etc/route.conf)    27017
Shard 1        (/etc/sd1.conf)      27018
Shard 2        (/etc/sd2.conf)      27019
Shard 3        (/etc/sd3.conf)      27020

Every node runs the config server, the router, and all three shard mongod processes. Each shard's replica-set role rotates across the nodes:

192.168.1.30  - sd1: primary,   sd2: arbiter,   sd3: secondary
192.168.1.52  - sd1: secondary, sd2: primary,   sd3: arbiter
192.168.1.108 - sd1: arbiter,   sd2: secondary, sd3: primary

First, create the following directories on all three nodes. For this test it is recommended to keep about 15 GB of free space under /.

[root@orcl ~]# mkdir -p /var/config
[root@orcl ~]# mkdir -p /var/sd1
[root@orcl ~]# mkdir -p /var/sd2
[root@orcl ~]# mkdir -p /var/sd3

Second, review the configuration files

[root@orcl ~]# cat /etc/config.conf
port = 28000
dbpath = /var/config
logpath = /var/config/config.log
logappend = true
fork = true
configsvr = true

[root@orcl ~]# cat /etc/route.conf
port = 27017
configdb = 192.168.1.30:28000,192.168.1.52:28000,192.168.1.108:28000
logpath = /var/log/mongos.log
logappend = true
fork = true

[root@orcl ~]# cat /etc/sd1.conf
port = 27018
dbpath = /var/sd1
logpath = /var/sd1/shard1.log
logappend = true
shardsvr = true
replSet = set1
fork = true

[root@orcl ~]# cat /etc/sd2.conf
port = 27019
dbpath = /var/sd2
logpath = /var/sd2/shard2.log
logappend = true
shardsvr = true
replSet = set2
fork = true

[root@orcl ~]# cat /etc/sd3.conf
port = 27020
dbpath = /var/sd3
logpath = /var/sd3/shard3.log
logappend = true
shardsvr = true
replSet = set3
fork = true
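The five files above differ only in port, dbpath, logpath, and replSet; if you are scripting the rollout, a small generator avoids copy-paste mistakes. The sketch below is illustrative only (the helper names are not part of the original deployment; ports, paths, and IPs are taken from the listings above):

```javascript
// Sketch: generate the config-file contents used in this deployment.
// shardConf(n) builds /etc/sdN.conf; configConf and routeConf mirror
// /etc/config.conf and /etc/route.conf from the listings above.
function shardConf(n) {
  return [
    `port = ${27017 + n}`,              // sd1 -> 27018, sd2 -> 27019, sd3 -> 27020
    `dbpath = /var/sd${n}`,
    `logpath = /var/sd${n}/shard${n}.log`,
    'logappend = true',
    'shardsvr = true',
    `replSet = set${n}`,
    'fork = true',
  ].join('\n');
}

const configConf = [
  'port = 28000',
  'dbpath = /var/config',
  'logpath = /var/config/config.log',
  'logappend = true',
  'fork = true',
  'configsvr = true',
].join('\n');

const routeConf = [
  'port = 27017',
  'configdb = 192.168.1.30:28000,192.168.1.52:28000,192.168.1.108:28000',
  'logpath = /var/log/mongos.log',
  'logappend = true',
  'fork = true',
].join('\n');

// Print all five files; in practice you would write them to /etc on each node.
for (const n of [1, 2, 3]) console.log(`--- /etc/sd${n}.conf ---\n${shardConf(n)}\n`);
console.log(`--- /etc/config.conf ---\n${configConf}\n`);
console.log(`--- /etc/route.conf ---\n${routeConf}`);
```

Since every node runs the same set of processes, the same five files can be copied to all three machines unchanged.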

Third, synchronize the clocks on the three nodes

(Omitted here; use NTP, e.g. ntpdate or chrony.)

Fourth, start the config server on all three nodes

Node 1

[root@orcl ~]# mongod -f /etc/config.conf
about to fork child process, waiting until server is ready for connections.
forked process: 3472
child process started successfully, parent exiting
[root@orcl ~]# ps -ef | grep mongo
root      3472     1  1 19:15 ?        00:00:01 mongod -f /etc/config.conf
root      3499  2858  0 19:17 pts/0    00:00:00 grep mongo
[root@orcl ~]# netstat -anltp | grep 28000
tcp        0      0 0.0.0.0:28000    0.0.0.0:*    LISTEN    3472/mongod

Node 2

[root@localhost ~]# mongod -f /etc/config.conf
about to fork child process, waiting until server is ready for connections.
forked process: 2998
child process started successfully, parent exiting
[root@localhost ~]# ps -ef | grep mongo
root      2998     1  8 19:15 ?        00:00:08 mongod -f /etc/config.conf
root      3014  2546  0 19:17 pts/0    00:00:00 grep mongo
[root@localhost ~]# netstat -anltp | grep 28000
tcp        0      0 0.0.0.0:28000    0.0.0.0:*    LISTEN    2998/mongod

Node 3

[root@db10g ~]# mongod -f /etc/config.conf
about to fork child process, waiting until server is ready for connections.
forked process: 4086
child process started successfully, parent exiting
[root@db10g ~]# ps -ef | grep mongo
root      4086     1  2 19:25 ?        00:00:00 mongod -f /etc/config.conf
root      4100  3786  0 19:25 pts/0    00:00:00 grep mongo
[root@db10g ~]# netstat -anltp | grep 28000
tcp        0      0 0.0.0.0:28000    0.0.0.0:*    LISTEN    4086/mongod

Fifth, start the mongos router on all three nodes

Node 1

[root@orcl ~]# mongos -f /etc/route.conf
about to fork child process, waiting until server is ready for connections.
forked process: 3575
child process started successfully, parent exiting
[root@orcl ~]# netstat -anltp | grep 2701
tcp        0      0 0.0.0.0:27017    0.0.0.0:*    LISTEN    3575/mongos

Node 2

[root@localhost ~]# mongos -f /etc/route.conf
about to fork child process, waiting until server is ready for connections.
forked process: 3057
child process started successfully, parent exiting
[root@localhost ~]# netstat -anltp | grep 2701
tcp        0      0 0.0.0.0:27017    0.0.0.0:*    LISTEN    3057/mongos

Node 3

[root@db10g ~]# mongos -f /etc/route.conf
about to fork child process, waiting until server is ready for connections.
forked process: 4108
child process started successfully, parent exiting
[root@db10g ~]# netstat -anltp | grep 27017
tcp        0      0 0.0.0.0:27017    0.0.0.0:*    LISTEN    4108/mongos

Sixth, start the shard servers on all three nodes

mongod -f /etc/sd1.conf

mongod -f /etc/sd2.conf

mongod -f /etc/sd3.conf

Node 1

[root@orcl ~]# ps -ef | grep mongo
root      3472     1  2 19:15 ?        00:02:18 mongod -f /etc/config.conf
root      3575     1  0 19:28 ?        00:00:48 mongos -f /etc/route.conf
root      4135     1  0 20:52 ?        00:00:07 mongod -f /etc/sd1.conf
root      4205     1  0 20:55 ?        00:00:05 mongod -f /etc/sd2.conf
root      4265     1  0 20:58 ?        00:00:04 mongod -f /etc/sd3.conf

Node 2

[root@localhost ~]# ps -ef | grep mongo
root      2998     1  1 19:15 ?        00:02:02 mongod -f /etc/config.conf
root      3057     1  1 19:28 ?        00:01:02 mongos -f /etc/route.conf
root      3277     1  1 20:52 ?        00:00:20 mongod -f /etc/sd1.conf
root      3334     1  6 20:56 ?        00:00:52 mongod -f /etc/sd2.conf
root      3470     1  1 21:01 ?        00:00:07 mongod -f /etc/sd3.conf

Node 3

[root@db10g data]# ps -ef | grep mongo
root      4086     1  1 19:25 ?        00:01:58 mongod -f /etc/config.conf
root      4108     1  0 19:27 ?        00:00:55 mongos -f /etc/route.conf
root      4592     1  0 20:54 ?        00:00:07 mongod -f /etc/sd1.conf
root      4646     1  3 20:56 ?        00:00:30 mongod -f /etc/sd2.conf
root      4763     1  4 21:04 ?        00:00:12 mongod -f /etc/sd3.conf

Seventh, configure the replica sets

192.168.1.30

[root@orcl ~]# mongo --port 27018
MongoDB shell version: 2.6.4
connecting to: 127.0.0.1:27018/test
> use admin
switched to db admin
> rs1 = {_id: "set1", members: [{_id: 0, host: "192.168.1.30:27018", priority: 2}, {_id: 1, host: "192.168.1.52:27018"}, {_id: 2, host: "192.168.1.108:27018", arbiterOnly: true}]}
{
        "_id" : "set1",
        "members" : [
                {
                        "_id" : 0,
                        "host" : "192.168.1.30:27018",
                        "priority" : 2
                },
                {
                        "_id" : 1,
                        "host" : "192.168.1.52:27018"
                },
                {
                        "_id" : 2,
                        "host" : "192.168.1.108:27018",
                        "arbiterOnly" : true
                }
        ]
}
> rs.initiate(rs1)
{
        "info" : "Config now saved locally.  Should come online in about a minute.",
        "ok" : 1
}

192.168.1.52

[root@orcl ~]# mongo --port 27019
MongoDB shell version: 2.6.4
connecting to: 127.0.0.1:27019/test
> use admin
switched to db admin
> rs2 = {_id: "set2", members: [{_id: 0, host: "192.168.1.52:27019", priority: 2}, {_id: 1, host: "192.168.1.108:27019"}, {_id: 2, host: "192.168.1.30:27019", arbiterOnly: true}]}
{
        "_id" : "set2",
        "members" : [
                {
                        "_id" : 0,
                        "host" : "192.168.1.52:27019",
                        "priority" : 2
                },
                {
                        "_id" : 1,
                        "host" : "192.168.1.108:27019"
                },
                {
                        "_id" : 2,
                        "host" : "192.168.1.30:27019",
                        "arbiterOnly" : true
                }
        ]
}
> rs.initiate(rs2);
{
        "info" : "Config now saved locally.  Should come online in about a minute.",
        "ok" : 1
}

192.168.1.108

[root@localhost sd3]# mongo --port 27020
MongoDB shell version: 2.6.4
connecting to: 127.0.0.1:27020/test
> use admin
switched to db admin
> rs3 = {_id: "set3", members: [{_id: 0, host: "192.168.1.108:27020", priority: 2}, {_id: 1, host: "192.168.1.30:27020"}, {_id: 2, host: "192.168.1.52:27020", arbiterOnly: true}]}
{
        "_id" : "set3",
        "members" : [
                {
                        "_id" : 0,
                        "host" : "192.168.1.108:27020",
                        "priority" : 2
                },
                {
                        "_id" : 1,
                        "host" : "192.168.1.30:27020"
                },
                {
                        "_id" : 2,
                        "host" : "192.168.1.52:27020",
                        "arbiterOnly" : true
                }
        ]
}
> rs.initiate(rs3);
{
        "info" : "Config now saved locally.  Should come online in about a minute.",
        "ok" : 1
}
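The three replica-set configs above follow a fixed rotation: each host is primary for exactly one set, secondary for another, and arbiter for the third, on consecutive ports. A short sketch (illustrative only; the helper name is an assumption, while the hosts, ports, and priorities come from the configs above) generates all three documents from that pattern:

```javascript
// Sketch: build the rs1/rs2/rs3 config documents from the rotation pattern.
const hosts = ['192.168.1.30', '192.168.1.52', '192.168.1.108'];

function rsConfig(n) {
  const port = 27017 + n;  // set1 -> 27018, set2 -> 27019, set3 -> 27020
  // Rotate the host list so that set n's primary is hosts[n - 1].
  const order = [0, 1, 2].map(i => hosts[(n - 1 + i) % 3]);
  return {
    _id: `set${n}`,
    members: [
      { _id: 0, host: `${order[0]}:${port}`, priority: 2 },        // primary
      { _id: 1, host: `${order[1]}:${port}` },                     // secondary
      { _id: 2, host: `${order[2]}:${port}`, arbiterOnly: true },  // arbiter
    ],
  };
}

console.log(JSON.stringify(rsConfig(1), null, 2));
```

The document printed for set1 matches the rs1 object entered above; rsConfig(2) and rsConfig(3) reproduce rs2 and rs3.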

Eighth, add the shards

This can be run from any one of the three nodes.

192.168.1.30

[root@orcl sd3]# mongo --port 27017
MongoDB shell version: 2.6.4
connecting to: 127.0.0.1:27017/test
mongos> use admin
switched to db admin
mongos> db.runCommand({addshard: "set1/192.168.1.30:27018,192.168.1.52:27018,192.168.1.108:27018"})
{ "shardAdded" : "set1", "ok" : 1 }
mongos> db.runCommand({addshard: "set2/192.168.1.30:27019,192.168.1.52:27019,192.168.1.108:27019"})
{ "shardAdded" : "set2", "ok" : 1 }
mongos> db.runCommand({addshard: "set3/192.168.1.30:27020,192.168.1.52:27020,192.168.1.108:27020"})
{ "shardAdded" : "set3", "ok" : 1 }

Ninth, view the shard information

mongos> db.runCommand({listshards: 1})
{
        "shards" : [
                {
                        "_id" : "set1",
                        "host" : "set1/192.168.1.30:27018,192.168.1.52:27018"
                },
                {
                        "_id" : "set2",
                        "host" : "set2/192.168.1.108:27019,192.168.1.52:27019"
                },
                {
                        "_id" : "set3",
                        "host" : "set3/192.168.1.108:27020,192.168.1.30:27020"
                }
        ],
        "ok" : 1
}

Note that arbiters are not listed in the host strings; only the data-bearing members appear.

Tenth, remove a shard. The removeshard command only starts draining the shard's chunks; run the same command again to check progress until the state reports completed.

mongos> db.runCommand({removeshard: "set3"})
{
        "msg" : "draining started successfully",
        "state" : "started",
        "shard" : "set3",
        "ok" : 1
}

Eleventh, shard management

mongos> use config
switched to db config
mongos> db.shards.find();
{ "_id" : "set1", "host" : "set1/192.168.1.30:27018,192.168.1.52:27018" }
{ "_id" : "set2", "host" : "set2/192.168.1.108:27019,192.168.1.52:27019" }
{ "_id" : "set3", "host" : "set3/192.168.1.108:27020,192.168.1.30:27020" }

Twelfth, declare the database and collection to be sharded

Switch to the admin database:

mongos> use admin

Enable sharding on the test database:

mongos> db.runCommand({enablesharding: "test"})
{ "ok" : 1 }

Declare the lineqi collection sharded on a hashed id key:

mongos> db.runCommand({shardcollection: "test.lineqi", key: {id: "hashed"}})
{ "collectionsharded" : "test.lineqi", "ok" : 1 }

Thirteenth, test script

Switch to the test database and insert 100,000 documents:

mongos> use test
mongos> for (var i = 1; i <= 100000; i++) db.lineqi.save({id: i, name: "12345678", sex: "male", age: 27, value: "test"});
WriteResult({ "nInserted" : 1 })

Fourteenth, test results

View the chunk distribution:

mongos> use config
switched to db config
mongos> db.chunks.find();
{ "_id" : "test.users-id_MinKey", "lastmod" : Timestamp(2, 0), "lastmodEpoch" : ObjectId("55ddb3a70f613da70e8ce303"), "ns" : "test.users", "min" : { "id" : { "$minKey" : 1 } }, "max" : { "id" : 1 }, "shard" : "set1" }
{ "_id" : "test.users-id_1.0", "lastmod" : Timestamp(3, 1), "lastmodEpoch" : ObjectId("55ddb3a70f613da70e8ce303"), "ns" : "test.users", "min" : { "id" : 1 }, "max" : { "id" : 4752 }, "shard" : "set2" }
{ "_id" : "test.users-id_4752.0", "lastmod" : Timestamp(3, 0), "lastmodEpoch" : ObjectId("55ddb3a70f613da70e8ce303"), "ns" : "test.users", "min" : { "id" : 4752 }, "max" : { "id" : { "$maxKey" : 1 } }, "shard" : "set3" }
{ "_id" : "test.lineqi-id_MinKey", "lastmod" : Timestamp(3, 2), "lastmodEpoch" : ObjectId("55ddb7460f613da70e8ce380"), "ns" : "test.lineqi", "min" : { "id" : { "$minKey" : 1 } }, "max" : { "id" : NumberLong("-6148914691236517204") }, "shard" : "set2" }
{ "_id" : "test.lineqi-id_-3074457345618258602", "lastmod" : Timestamp(3, 4), "lastmodEpoch" : ObjectId("55ddb7460f613da70e8ce380"), "ns" : "test.lineqi", "min" : { "id" : NumberLong("-3074457345618258602") }, "max" : { "id" : NumberLong(0) }, "shard" : "set3" }
{ "_id" : "test.lineqi-id_3074457345618258602", "lastmod" : Timestamp(3, 6), "lastmodEpoch" : ObjectId("55ddb7460f613da70e8ce380"), "ns" : "test.lineqi", "min" : { "id" : NumberLong("3074457345618258602") }, "max" : { "id" : NumberLong("6148914691236517204") }, "shard" : "set1" }
{ "_id" : "test.lineqi-id_-6148914691236517204", "lastmod" : Timestamp(3, 3), "lastmodEpoch" : ObjectId("55ddb7460f613da70e8ce380"), "ns" : "test.lineqi", "min" : { "id" : NumberLong("-6148914691236517204") }, "max" : { "id" : NumberLong("-3074457345618258602") }, "shard" : "set2" }
{ "_id" : "test.lineqi-id_0", "lastmod" : Timestamp(3, 5), "lastmodEpoch" : ObjectId("55ddb7460f613da70e8ce380"), "ns" : "test.lineqi", "min" : { "id" : NumberLong(0) }, "max" : { "id" : NumberLong("3074457345618258602") }, "shard" : "set3" }
{ "_id" : "test.lineqi-id_6148914691236517204", "lastmod" : Timestamp(3, 7), "lastmodEpoch" : ObjectId("55ddb7460f613da70e8ce380"), "ns" : "test.lineqi", "min" : { "id" : NumberLong("6148914691236517204") }, "max" : { "id" : { "$maxKey" : 1 } }, "shard" : "set1" }
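The test.lineqi chunk boundaries above are not arbitrary. For a hashed shard key, MongoDB pre-splits the signed 64-bit hash space at creation time (by default two initial chunks per shard, so six chunks here), and the split points are multiples of floor(2^63 / 3). A few lines of BigInt arithmetic (an illustrative sketch, not MongoDB code) reproduce exactly the boundary values shown in the listing:

```javascript
// Reproduce the initial split points for a hashed shard key spread
// across 3 shards: the six equal chunks meet at multiples of 2^63 / 3.
const m = (2n ** 63n) / 3n;  // BigInt division truncates: 3074457345618258602n

// Interior boundaries between MinKey and MaxKey, as strings for display.
const splits = [-2n, -1n, 0n, 1n, 2n].map(k => (k * m).toString());
console.log(splits);
// -> ["-6148914691236517204", "-3074457345618258602", "0",
//     "3074457345618258602", "6148914691236517204"]
```

These are precisely the NumberLong values in the db.chunks.find() output, which is why the 100,000 sequential ids end up spread nearly evenly across set1, set2, and set3.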

View the storage statistics for the lineqi collection:

mongos> use test
mongos> db.lineqi.stats();

{
        "sharded" : true,
        "systemFlags" : 1,
        "userFlags" : 1,
        "ns" : "test.lineqi",
        "count" : 100000,
        "numExtents" : 18,
        "size" : 11200000,
        "storageSize" : 33546240,
        "totalIndexSize" : 8086064,
        "indexSizes" : {
                "_id_" : 3262224,
                "id_hashed" : 4823840
        },
        "avgObjSize" : 112,
        "nindexes" : 2,
        "nchunks" : 6,
        "shards" : {
                "set1" : {
                        "ns" : "test.lineqi",
                        "count" : 33102,
                        "size" : 3707424,
                        "avgObjSize" : 112,
                        "storageSize" : 11182080,
                        "numExtents" : 6,
                        "nindexes" : 2,
                        "lastExtentSize" : 8388608,
                        "paddingFactor" : 1,
                        "systemFlags" : 1,
                        "userFlags" : 1,
                        "totalIndexSize" : 2649024,
                        "indexSizes" : {
                                "_id_" : 1079232,
                                "id_hashed" : 1569792
                        },
                        "ok" : 1
                },
                "set2" : {
                        "ns" : "test.lineqi",
                        "count" : 33755,
                        "size" : 3780560,
                        "avgObjSize" : 112,
                        "storageSize" : 11182080,
                        "numExtents" : 6,
                        "nindexes" : 2,
                        "lastExtentSize" : 8388608,
                        "paddingFactor" : 1,
                        "systemFlags" : 1,
                        "userFlags" : 1,
                        "totalIndexSize" : 2755312,
                        "indexSizes" : {
                                "_id_" : 1103760,
                                "id_hashed" : 1651552
                        },
                        "ok" : 1
                },
                "set3" : {
                        "ns" : "test.lineqi",
                        "count" : 33143,
                        "size" : 3712016,
                        "avgObjSize" : 112,
                        "storageSize" : 11182080,
                        "numExtents" : 6,
                        "nindexes" : 2,
                        "lastExtentSize" : 8388608,
                        "paddingFactor" : 1,
                        "systemFlags" : 1,
                        "userFlags" : 1,
                        "totalIndexSize" : 2681728,
                        "indexSizes" : {
                                "_id_" : 1079232,
                                "id_hashed" : 1602496
                        },
                        "ok" : 1
                }
        },
        "ok" : 1
}
     
         
       
         
           
     
  CopyRight 2002-2016 newfreesoft.com, All Rights Reserved.