$ mongod
2019-03-13T14:03:02.496+0000 I CONTROL [initandlisten] MongoDB starting : pid=2530 port=27017 dbpath=/data/db 64-bit host=m103
2019-03-13T14:03:02.497+0000 I CONTROL [initandlisten] db version v3.6.11
2019-03-13T14:03:02.497+0000 I CONTROL [initandlisten] git version: b4339db12bf57ffee5b84a95c6919dbd35fe31c9
...
2019-03-13T14:03:03.493+0000 I FTDC [initandlisten] Initializing full-time diagnostic data capture with directory '/data/db/diagnostic.data'
2019-03-13T14:03:03.493+0000 I NETWORK [initandlisten] waiting for connections on port 27017
Basic Cluster Management
mongod & mongo
The mongod command starts a MongoDB server process, and mongo is a shell used to connect to a MongoDB server and perform database administration operations.
Basic Operations
This section covers some basic usage of the mongod and mongo commands.
The mongod startup output shown above indicates:
- MongoDB started successfully
- the process ID is 2530
- it is listening on port 27017
- the dbpath is /data/db
- the MongoDB version is v3.6.11
$ netstat -ntulop
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name Timer
tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN - off (0.00/0/0)
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN - off (0.00/0/0)
tcp 0 0 0.0.0.0:43671 0.0.0.0:* LISTEN - off (0.00/0/0)
tcp 0 0 127.0.0.1:27017 0.0.0.0:* LISTEN 2530/mongod off (0.00/0/0)
tcp6 0 0 :::111 :::* LISTEN - off (0.00/0/0)
tcp6 0 0 :::22 :::* LISTEN - off (0.00/0/0)
tcp6 0 0 :::45403 :::* LISTEN - off (0.00/0/0)
udp 0 0 0.0.0.0:111 0.0.0.0:* - off (0.00/0/0)
udp 0 0 0.0.0.0:51421 0.0.0.0:* - off (0.00/0/0)
udp 0 0 0.0.0.0:7926 0.0.0.0:* - off (0.00/0/0)
udp 0 0 0.0.0.0:775 0.0.0.0:* - off (0.00/0/0)
udp 0 0 127.0.0.1:904 0.0.0.0:* - off (0.00/0/0)
udp 0 0 0.0.0.0:68 0.0.0.0:* - off (0.00/0/0)
udp6 0 0 :::111 :::* - off (0.00/0/0)
udp6 0 0 :::52488 :::* - off (0.00/0/0)
udp6 0 0 :::14840 :::* - off (0.00/0/0)
udp6 0 0 :::775 :::* - off (0.00/0/0)
$ mongo
MongoDB shell version v3.6.11
connecting to: mongodb://127.0.0.1:27017/?gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("26452ac6-5ba0-4a35-826f-2c5fcf491742") }
MongoDB server version: 3.6.11
MongoDB Enterprise >
2019-03-13T14:09:12.012+0000 I NETWORK [listener] connection accepted from 127.0.0.1:46344 #1 (1 connection now open)
2019-03-13T14:09:12.012+0000 I NETWORK [conn1] received client metadata from 127.0.0.1:46344 conn1: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "3.6.11" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "14.04" } }
MongoDB Enterprise > show dbs
admin 0.000GB
config 0.000GB
local 0.000GB
MongoDB Enterprise > use admin
switched to db admin
MongoDB Enterprise > db.shutdownServer()
server should be down...
2019-03-13T14:13:25.764+0000 I COMMAND [conn1] terminating, shutdown command received
2019-03-13T14:13:25.764+0000 I NETWORK [conn1] shutdown: going to close listening sockets...
2019-03-13T14:13:25.764+0000 I NETWORK [conn1] removing socket file: /tmp/mongodb-27017.sock
2019-03-13T14:13:25.765+0000 I FTDC [conn1] Shutting down full-time diagnostic data capture
2019-03-13T14:13:25.766+0000 I STORAGE [conn1] WiredTigerKVEngine shutting down
2019-03-13T14:13:25.912+0000 I STORAGE [conn1] shutdown: removing fs lock...
2019-03-13T14:13:25.912+0000 I CONTROL [conn1] now exiting
2019-03-13T14:13:25.913+0000 I CONTROL [conn1] shutting down with code:0
MongoDB Enterprise > exit
bye
Starting mongod with Command-Line Options
This section shows how to pass options to mongod when starting a MongoDB server.
$ mongod -h
...
--port arg specify port number - 27017 by default
--dbpath arg directory for datafiles - defaults to
/data/db
--logpath arg log file to send write to instead of
stdout - has to be a file, not
directory
--fork fork server process
$ mkdir first_mongod
$ mongod --port 30000 --dbpath first_mongod/ --logpath first_mongod/mongod01.log --fork
about to fork child process, waiting until server is ready for connections.
forked process: 2750
child process started successfully, parent exiting
$ ps -aux | grep mongo*
vagrant 2750 0.8 2.5 1105028 53100 ? Sl 14:25 0:00 mongod --port 30000 --dbpath first_mongod/ --logpath first_mongod/mongod01.log --fork
$ netstat -ntulop | grep 2750
tcp 0 0 127.0.0.1:30000 0.0.0.0:* LISTEN 2750/mongod off (0.00/0/0)
$ mongo --port 30000
MongoDB shell version v3.6.11
connecting to: mongodb://127.0.0.1:30000/?gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("db4aa0de-5309-401a-bd64-1f60466a5acf") }
MongoDB server version: 3.6.11
MongoDB Enterprise > use admin
switched to db admin
MongoDB Enterprise > db.shutdownServer()
server should be down...
MongoDB Enterprise > exit
bye
Binding Multiple Addresses and Creating a User
This section shows how to start mongod bound to multiple IP addresses, and how to create an administrative account with the mongo shell.
$ mongod --port 27000 --dbpath /data/db/ --bind_ip '192.168.103.100,localhost'
$ ps -ef | grep mongod
vagrant 2547 1959 7 23:35 pts/0 00:00:00 mongod --port 27000 --dbpath /data/db/ --bind_ip 192.168.103.100,localhost
$ netstat -antulop | grep 2547
tcp 0 0 127.0.0.1:27000 0.0.0.0:* LISTEN 2547/mongod off (0.00/0/0)
tcp 0 0 192.168.103.100:27000 0.0.0.0:* LISTEN 2547/mongod off (0.00/0/0)
$ mongo admin --host localhost:27000 --eval '
db.createUser({
user: "kylin",
pwd: "mongodb",
roles: [
{role: "root", db: "admin"}
]
})
'
$ mongo kylin --host localhost:27000
MongoDB shell version v3.6.11
connecting to: mongodb://localhost:27000/kylin?gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("3b10edf4-5d3a-4831-a505-787298cdae34") }
MongoDB server version: 3.6.11
MongoDB Enterprise > use admin
switched to db admin
MongoDB Enterprise > db.shutdownServer()
server should be down...
MongoDB Enterprise > exit
bye
Starting MongoDB with a Configuration File
This section uses a configuration file to supply the options mongod needs at startup.
storage:
dbPath: /data/db/
net:
port: 27000
bindIp: localhost,192.168.103.100
security:
authorization: enabled
$ mongod --config my-mongod.conf
$ ps -ef | grep mongod
vagrant 2699 1959 0 23:48 pts/0 00:00:01 mongod --config my-mongod.conf
$ netstat -antulop | grep 2699
tcp 0 0 192.168.103.100:27000 0.0.0.0:* LISTEN 2699/mongod off (0.00/0/0)
tcp 0 0 127.0.0.1:27000 0.0.0.0:* LISTEN 2699/mongod off (0.00/0/0
$ kill -9 2699
Changing the Default Data Storage Path
This section shows how to point mongod at a different data directory at startup.
$ sudo mkdir -p /var/mongodb/db/
$ sudo chown vagrant:vagrant /var/mongodb/db/
$ ls -l /var/mongodb/
total 4
drwxr-xr-x 2 vagrant vagrant 4096 Mar 14 00:10 db
storage:
dbPath: /var/mongodb/db/
net:
port: 27000
bindIp: localhost,192.168.103.100
security:
authorization: enabled
$ mongod --config my-mongod.conf
$ ps -ef | grep mongod
vagrant 3257 1959 1 00:17 pts/0 00:00:00 mongod --config my-mongod.conf
$ netstat -antulop | grep 3257
tcp 0 0 192.168.103.100:27000 0.0.0.0:* LISTEN 3257/mongod off (0.00/0/0)
tcp 0 0 127.0.0.1:27000 0.0.0.0:* LISTEN 3257/mongod off (0.00/0/0)
$ ls -l /var/mongodb/db/
total 196
-rw------- 1 vagrant vagrant 45 Mar 14 00:17 WiredTiger
-rw------- 1 vagrant vagrant 21 Mar 14 00:17 WiredTiger.lock
-rw------- 1 vagrant vagrant 1103 Mar 14 00:19 WiredTiger.turtle
-rw------- 1 vagrant vagrant 57344 Mar 14 00:19 WiredTiger.wt
-rw------- 1 vagrant vagrant 4096 Mar 14 00:17 WiredTigerLAS.wt
-rw------- 1 vagrant vagrant 16384 Mar 14 00:18 _mdb_catalog.wt
-rw------- 1 vagrant vagrant 16384 Mar 14 00:18 collection-0--7654468380997166951.wt
-rw------- 1 vagrant vagrant 16384 Mar 14 00:18 collection-2--7654468380997166951.wt
-rw------- 1 vagrant vagrant 4096 Mar 14 00:17 collection-4--7654468380997166951.wt
drwx------ 2 vagrant vagrant 4096 Mar 14 00:20 diagnostic.data
-rw------- 1 vagrant vagrant 16384 Mar 14 00:18 index-1--7654468380997166951.wt
-rw------- 1 vagrant vagrant 16384 Mar 14 00:18 index-3--7654468380997166951.wt
-rw------- 1 vagrant vagrant 4096 Mar 14 00:17 index-5--7654468380997166951.wt
-rw------- 1 vagrant vagrant 4096 Mar 14 00:18 index-6--7654468380997166951.wt
drwx------ 2 vagrant vagrant 4096 Mar 14 00:17 journal
-rw------- 1 vagrant vagrant 5 Mar 14 00:17 mongod.lock
-rw------- 1 vagrant vagrant 16384 Mar 14 00:19 sizeStorer.wt
-rw------- 1 vagrant vagrant 114 Mar 14 00:17 storage.bson
$ mongo admin --port 27000
MongoDB shell version v3.6.11
connecting to: mongodb://127.0.0.1:27000/admin?gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("bf41ace1-63a6-4da1-af9f-c93882fdbcda") }
MongoDB server version: 3.6.11
MongoDB Enterprise >
MongoDB Enterprise > use admin
switched to db admin
MongoDB Enterprise > db.shutdownServer()
server should be down...
MongoDB Enterprise > exit
bye
Configuring the Logging Policy
This section configures a logging policy that logs operations taking longer than 50 milliseconds.
storage:
dbPath: /var/mongodb/db/
systemLog:
destination: file
logAppend: true
path: /var/mongodb/db/mongod.log
net:
port: 27000
bindIp: localhost,192.168.103.100
processManagement:
fork: true
operationProfiling:
slowOpThresholdMs: 50
security:
authorization: enabled
$ mongod --config my-mongod.conf
Profiling and Execution Analysis
To analyze the performance of operations in MongoDB, such as their execution time, you need the database profiler. This section shows how to use the profiler to analyze operations in MongoDB.
MongoDB Enterprise > use newDB
switched to db newDB
MongoDB Enterprise > db.getProfilingLevel()
0
MongoDB Enterprise > db.setProfilingLevel(1)
{ "was" : 0, "slowms" : 100, "sampleRate" : 1, "ok" : 1 }
MongoDB Enterprise > show collections
system.profile
MongoDB Enterprise > db.setProfilingLevel(1, {slowms: 0})
{ "was" : 1, "slowms" : 100, "sampleRate" : 1, "ok" : 1 }
MongoDB Enterprise > db.new_connection.insert({"id": 1001, "name": "Kylin"})
WriteResult({ "nInserted" : 1 })
MongoDB Enterprise > db.system.profile.find().pretty()
{
"op" : "insert",
"ns" : "newDB.new_connection",
"command" : {
"insert" : "new_connection",
"ordered" : true,
"lsid" : {
"id" : UUID("a5f34116-7269-4372-ab7c-67a3254a1afe")
},
"$db" : "newDB"
},
"ninserted" : 1,
"keysInserted" : 1,
"numYield" : 0,
"locks" : {
"Global" : {
"acquireCount" : {
"r" : NumberLong(5),
"w" : NumberLong(3)
}
},
"Database" : {
"acquireCount" : {
"r" : NumberLong(1),
"w" : NumberLong(2),
"W" : NumberLong(1)
}
},
"Collection" : {
"acquireCount" : {
"r" : NumberLong(1),
"w" : NumberLong(2)
}
}
},
"responseLength" : 29,
"protocol" : "op_msg",
"millis" : 60,
"ts" : ISODate("2019-03-14T09:37:47.393Z"),
"client" : "127.0.0.1",
"appName" : "MongoDB Shell",
"allUsers" : [ ],
"user" : ""
}
MongoDB Enterprise > db.new_connection.find({"id": 1001})
{ "_id" : ObjectId("5c8a20eb29d0caf9229a8d82"), "id" : 1001, "name" : "Kylin" }
MongoDB Enterprise > db.system.profile.find().pretty()
...
{
"op" : "query",
"ns" : "newDB.new_connection",
"command" : {
"find" : "new_connection",
"filter" : {
"id" : 1001
},
"lsid" : {
"id" : UUID("a5f34116-7269-4372-ab7c-67a3254a1afe")
},
"$db" : "newDB"
},
"keysExamined" : 0,
"docsExamined" : 1,
"cursorExhausted" : true,
"numYield" : 0,
"locks" : {
"Global" : {
"acquireCount" : {
"r" : NumberLong(2)
}
},
"Database" : {
"acquireCount" : {
"r" : NumberLong(1)
}
},
"Collection" : {
"acquireCount" : {
"r" : NumberLong(1)
}
}
},
"nreturned" : 1,
"responseLength" : 146,
"protocol" : "op_msg",
"millis" : 0,
"planSummary" : "COLLSCAN",
"execStats" : {
"stage" : "COLLSCAN",
"filter" : {
"id" : {
"$eq" : 1001
}
},
"nReturned" : 1,
"executionTimeMillisEstimate" : 0,
"works" : 3,
"advanced" : 1,
"needTime" : 1,
"needYield" : 0,
"saveState" : 0,
"restoreState" : 0,
"isEOF" : 1,
"invalidates" : 0,
"direction" : "forward",
"docsExamined" : 1
},
"ts" : ISODate("2019-03-14T09:43:54.961Z"),
...
Creating an Administrative User
$ mongod -f /etc/mongod.conf
$ ps -ef | grep mongod
vagrant 5191 1956 5 14:52 pts/0 00:00:00 mongod -f /etc/mongod.conf
$ netstat -antulop | grep 5191
tcp 0 0 127.0.0.1:27017 0.0.0.0:* LISTEN 5191/mongod off (0.00/0/0
$ mongo --host 127.0.0.1:27017
MongoDB shell version v3.6.11
connecting to: mongodb://127.0.0.1:27017/?gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("d34d9ea7-369a-4466-865a-833556a63a3f") }
MongoDB server version: 3.6.11
MongoDB Enterprise > use admin
switched to db admin
MongoDB Enterprise > db.createUser({user: "root", pwd: "root123", roles: ["root"]})
Successfully added user: { "user" : "root", "roles" : [ "root" ] }
$ mongo --username root --password root123 --authenticationDatabase admin
MongoDB shell version v3.6.11
connecting to: mongodb://127.0.0.1:27017/?authSource=admin&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("eb8549e7-025c-4d89-94ec-e42096526967") }
MongoDB server version: 3.6.11
MongoDB Enterprise > db.stats()
{
"db" : "test",
"collections" : 0,
"views" : 0,
"objects" : 0,
"avgObjSize" : 0,
"dataSize" : 0,
"storageSize" : 0,
"numExtents" : 0,
"indexes" : 0,
"indexSize" : 0,
"fileSize" : 0,
"fsUsedSize" : 0,
"fsTotalSize" : 0,
"ok" : 1
}
MongoDB Enterprise > exit
bye
Creating an Application User
storage:
dbPath: /var/mongodb/db/
systemLog:
destination: file
logAppend: true
path: /var/mongodb/db/mongod.log
net:
port: 27000
bindIp: localhost,192.168.103.100
processManagement:
fork: true
security:
authorization: enabled
$ mongod -f test-mongod.conf
forked process: 5405
$ netstat -antulop | grep 5405
tcp 0 0 192.168.103.100:27000 0.0.0.0:* LISTEN 5405/mongod off (0.00/0/0)
tcp 0 0 127.0.0.1:27000 0.0.0.0:* LISTEN 5405/mongod off (0.00/0/0)
$ mongo --host 127.0.0.1:27000
MongoDB shell version v3.6.11
connecting to: mongodb://127.0.0.1:27000/?gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("dd7a993a-9b0d-4ad5-a802-b92d7127a1d0") }
MongoDB server version: 3.6.11
MongoDB Enterprise > use admin
switched to db admin
MongoDB Enterprise > db.createUser({user: "m103-admin", pwd: "m103-pass", roles: ["root"]})
Successfully added user: { "user" : "m103-admin", "roles" : [ "root" ] }
MongoDB Enterprise > exit
bye
$ mongo admin --host 127.0.0.1:27000 -u m103-admin -p m103-pass
MongoDB shell version v3.6.11
connecting to: mongodb://127.0.0.1:27000/admin?gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("e903a74b-fb15-4f3d-a295-8af6d72f7af2") }
MongoDB server version: 3.6.11
MongoDB Enterprise > use admin
switched to db admin
MongoDB Enterprise > db.createUser({user: "m103-application-user", pwd: "m103-application-pass", roles: [{db: "applicationData", role: "readWrite"}]})
Successfully added user: {
"user" : "m103-application-user",
"roles" : [
{
"db" : "applicationData",
"role" : "readWrite"
}
]
}
MongoDB Enterprise > show users
{
"_id" : "admin.m103-admin",
"user" : "m103-admin",
"db" : "admin",
"roles" : [
{
"role" : "root",
"db" : "admin"
}
]
}
{
"_id" : "admin.m103-application-user",
"user" : "m103-application-user",
"db" : "admin",
"roles" : [
{
"role" : "readWrite",
"db" : "applicationData"
}
]
}
$ mongo applicationData --host 127.0.0.1:27000 -u m103-application-user -p m103-application-pass --authenticationDatabase admin
MongoDB shell version v3.6.11
connecting to: mongodb://127.0.0.1:27000/applicationData?authSource=admin&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("2731cd1d-0137-4c29-a771-6d6657387465") }
MongoDB server version: 3.6.11
MongoDB Enterprise > db.inventory.insertMany([{ item: "journal", qty: 25, status: "A", size: { h: 14, w: 21, uom: "cm" }, tags: [ "blank", "red" ] }, { item: "notebook", qty: 50, status: "A", size: { h: 8.5, w: 11, uom: "in" }, tags: [ "red", "blank" ] }]);
{
"acknowledged" : true,
"insertedIds" : [
ObjectId("5c8d2518d2fe64d546a47c9e"),
ObjectId("5c8d2518d2fe64d546a47c9f")
]
}
MongoDB Enterprise > db.inventory.find({})
{ "_id" : ObjectId("5c8d2518d2fe64d546a47c9e"), "item" : "journal", "qty" : 25, "status" : "A", "size" : { "h" : 14, "w" : 21, "uom" : "cm" }, "tags" : [ "blank", "red" ] }
{ "_id" : ObjectId("5c8d2518d2fe64d546a47c9f"), "item" : "notebook", "qty" : 50, "status" : "A", "size" : { "h" : 8.5, "w" : 11, "uom" : "in" }, "tags" : [ "red", "blank" ] }
MongoDB Enterprise > exit
bye
Bulk Importing Data
This section bulk imports data using the user created in "Creating an Application User".
$ ls -l products.json
-rw-rw-r-- 1 vagrant vagrant 92216793 Mar 15 05:34 products.json
$ mongoimport --db applicationData --port 27000 --username m103-application-user --password m103-application-pass --authenticationDatabase admin --file products.json
2019-03-16T16:19:11.249+0000 no collection specified
2019-03-16T16:19:11.249+0000 using filename 'products' as collection
2019-03-16T16:19:11.262+0000 connected to: localhost:27000
2019-03-16T16:19:14.252+0000 [#####...................] applicationData.products 20.4MB/87.9MB (23.2%)
2019-03-16T16:19:17.252+0000 [###########.............] applicationData.products 40.6MB/87.9MB (46.2%)
2019-03-16T16:19:20.255+0000 [################........] applicationData.products 59.9MB/87.9MB (68.1%)
2019-03-16T16:19:23.251+0000 [#####################...] applicationData.products 79.8MB/87.9MB (90.8%)
2019-03-16T16:19:24.451+0000 [########################] applicationData.products 87.9MB/87.9MB (100.0%)
2019-03-16T16:19:24.451+0000 imported 516784 documents
MongoDB Enterprise > db.products.count()
516784
Replication Management
Configuring a 3-Node Replica Set
$ sudo mkdir -p /var/mongodb/pki
$ sudo chown vagrant:vagrant -R /var/mongodb
$ openssl rand -base64 741 > /var/mongodb/pki/m103-keyfile
$ chmod 600 /var/mongodb/pki/m103-keyfile
2. Create three member configuration files with the following contents:
mongod-repl-1.conf, mongod-repl-2.conf, mongod-repl-3.conf (contents omitted)
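As a reference only, a minimal sketch of what mongod-repl-1.conf could contain, pieced together from the keyfile path, data directories, ports, bind addresses, and replica set name used elsewhere in this section; mongod-repl-2.conf and mongod-repl-3.conf would differ only in dbPath, port, and log path, and the log file name here is an assumption:
storage:
  dbPath: /var/mongodb/db/1           # /var/mongodb/db/2 and /var/mongodb/db/3 for the other members
net:
  bindIp: 192.168.103.100,localhost
  port: 27001                         # 27002 and 27003 for the other members
security:
  authorization: enabled
  keyFile: /var/mongodb/pki/m103-keyfile
systemLog:
  destination: file
  path: /var/mongodb/db/mongod1.log   # assumed log file name; adjust per member
  logAppend: true
processManagement:
  fork: true
replication:
  replSetName: m103-repl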
$ mkdir -p /var/mongodb/db/{1,2,3}
$ mongod -f mongod-repl-1.conf
$ mongod -f mongod-repl-2.conf
$ mongod -f mongod-repl-3.conf
$ ps -ef | grep mongod
vagrant 2155 1 0 07:29 ? 00:00:00 mongod -f mongod-repl-1.conf
vagrant 2194 1 0 07:30 ? 00:00:00 mongod -f mongod-repl-2.conf
vagrant 2232 1 0 07:31 ? 00:00:00 mongod -f mongod-repl-3.conf
$ for i in 2155 2194 2232 ; do netstat -antulop | grep $i; done
tcp 0 0 127.0.0.1:27001 0.0.0.0:* LISTEN 2155/mongod off (0.00/0/0)
tcp 0 0 192.168.103.100:27001 0.0.0.0:* LISTEN 2155/mongod off (0.00/0/0)
tcp 0 0 127.0.0.1:27002 0.0.0.0:* LISTEN 2194/mongod off (0.00/0/0)
tcp 0 0 192.168.103.100:27002 0.0.0.0:* LISTEN 2194/mongod off (0.00/0/0)
tcp 0 0 127.0.0.1:27003 0.0.0.0:* LISTEN 2232/mongod off (0.00/0/0)
tcp 0 0 192.168.103.100:27003 0.0.0.0:* LISTEN 2232/mongod off (0.00/0/0)
$ mongo --port 27001
MongoDB shell version v3.6.11
connecting to: mongodb://127.0.0.1:27001/?gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("b5bd64d4-fec1-4002-b078-c4465e1fd966") }
MongoDB server version: 3.6.11
MongoDB Enterprise > rs.initiate()
{
"info2" : "no configuration specified. Using a default configuration for the set",
"me" : "192.168.103.100:27001",
"ok" : 1
}
MongoDB Enterprise m103-repl:SECONDARY> rs.status()
{
"set" : "m103-repl",
"date" : ISODate("2019-03-18T07:40:20.648Z"),
"myState" : 1,
"term" : NumberLong(1),
"syncingTo" : "",
"syncSourceHost" : "",
"syncSourceId" : -1,
"heartbeatIntervalMillis" : NumberLong(2000),
"optimes" : {
"lastCommittedOpTime" : {
"ts" : Timestamp(1552894818, 1),
"t" : NumberLong(1)
},
"readConcernMajorityOpTime" : {
"ts" : Timestamp(1552894818, 1),
"t" : NumberLong(1)
},
"appliedOpTime" : {
"ts" : Timestamp(1552894818, 1),
"t" : NumberLong(1)
},
"durableOpTime" : {
"ts" : Timestamp(1552894818, 1),
"t" : NumberLong(1)
}
},
"members" : [
{
"_id" : 0,
"name" : "192.168.103.100:27001",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 640,
"optime" : {
"ts" : Timestamp(1552894818, 1),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2019-03-18T07:40:18Z"),
"syncingTo" : "",
"syncSourceHost" : "",
"syncSourceId" : -1,
"infoMessage" : "could not find member to sync from",
"electionTime" : Timestamp(1552894756, 2),
"electionDate" : ISODate("2019-03-18T07:39:16Z"),
"configVersion" : 1,
"self" : true,
"lastHeartbeatMessage" : ""
}
],
"ok" : 1,
"operationTime" : Timestamp(1552894818, 1),
"$clusterTime" : {
"clusterTime" : Timestamp(1552894818, 1),
"signature" : {
"hash" : BinData(0,"b2Owp1OlR6reFIFTnG9/4e02+Tw="),
"keyId" : NumberLong("6669632199739834369")
}
}
}
MongoDB Enterprise m103-repl:PRIMARY> use admin
switched to db admin
MongoDB Enterprise m103-repl:PRIMARY> db.createUser({user: "m103-admin", pwd: "m103-pass", roles: [{role: "root", db: "admin"}]})
Successfully added user: {
"user" : "m103-admin",
"roles" : [
{
"role" : "root",
"db" : "admin"
}
]
}
MongoDB Enterprise m103-repl:PRIMARY> exit
bye
$ mongo --host "m103-repl/192.168.103.100:27001" -u "m103-admin" -p "m103-pass" --authenticationDatabase "admin"
MongoDB shell version v3.6.11
connecting to: mongodb://192.168.103.100:27001/?authSource=admin&gssapiServiceName=mongodb&replicaSet=m103-repl
2019-03-18T07:47:47.621+0000 I NETWORK [thread1] Starting new replica set monitor for m103-repl/192.168.103.100:27001
2019-03-18T07:47:47.622+0000 I NETWORK [thread1] Successfully connected to 192.168.103.100:27001 (1 connections now open to 192.168.103.100:27001 with a 5 second timeout)
Implicit session: session { "id" : UUID("b1ea59d3-b36f-4a84-bf96-3739d1a620e9") }
MongoDB server version: 3.6.11
MongoDB Enterprise m103-repl:PRIMARY> rs.add("192.168.103.100:27002")
{
"ok" : 1,
"operationTime" : Timestamp(1552895444, 1),
"$clusterTime" : {
"clusterTime" : Timestamp(1552895444, 1),
"signature" : {
"hash" : BinData(0,"/fYb24lG+07P1vFJbWlrave4/wg="),
"keyId" : NumberLong("6669632199739834369")
}
}
}
MongoDB Enterprise m103-repl:PRIMARY> rs.add("192.168.103.100:27003")
{
"ok" : 1,
"operationTime" : Timestamp(1552895447, 1),
"$clusterTime" : {
"clusterTime" : Timestamp(1552895447, 1),
"signature" : {
"hash" : BinData(0,"3qY1jjhSv+hsOWXvMPDFrHFOeic="),
"keyId" : NumberLong("6669632199739834369")
}
}
}
MongoDB Enterprise m103-repl:PRIMARY> rs.status()
{
"set" : "m103-repl",
"date" : ISODate("2019-03-18T07:52:04.922Z"),
"myState" : 1,
"term" : NumberLong(1),
"syncingTo" : "",
"syncSourceHost" : "",
"syncSourceId" : -1,
"heartbeatIntervalMillis" : NumberLong(2000),
"optimes" : {
"lastCommittedOpTime" : {
"ts" : Timestamp(1552895518, 1),
"t" : NumberLong(1)
},
"readConcernMajorityOpTime" : {
"ts" : Timestamp(1552895518, 1),
"t" : NumberLong(1)
},
"appliedOpTime" : {
"ts" : Timestamp(1552895518, 1),
"t" : NumberLong(1)
},
"durableOpTime" : {
"ts" : Timestamp(1552895518, 1),
"t" : NumberLong(1)
}
},
"members" : [
{
"_id" : 0,
"name" : "192.168.103.100:27001",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 1344,
"optime" : {
"ts" : Timestamp(1552895518, 1),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2019-03-18T07:51:58Z"),
"syncingTo" : "",
"syncSourceHost" : "",
"syncSourceId" : -1,
"infoMessage" : "",
"electionTime" : Timestamp(1552894756, 2),
"electionDate" : ISODate("2019-03-18T07:39:16Z"),
"configVersion" : 3,
"self" : true,
"lastHeartbeatMessage" : ""
},
{
"_id" : 1,
"name" : "192.168.103.100:27002",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 80,
"optime" : {
"ts" : Timestamp(1552895518, 1),
"t" : NumberLong(1)
},
"optimeDurable" : {
"ts" : Timestamp(1552895518, 1),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2019-03-18T07:51:58Z"),
"optimeDurableDate" : ISODate("2019-03-18T07:51:58Z"),
"lastHeartbeat" : ISODate("2019-03-18T07:52:03.064Z"),
"lastHeartbeatRecv" : ISODate("2019-03-18T07:52:03.607Z"),
"pingMs" : NumberLong(0),
"lastHeartbeatMessage" : "",
"syncingTo" : "192.168.103.100:27001",
"syncSourceHost" : "192.168.103.100:27001",
"syncSourceId" : 0,
"infoMessage" : "",
"configVersion" : 3
},
{
"_id" : 2,
"name" : "192.168.103.100:27003",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 77,
"optime" : {
"ts" : Timestamp(1552895518, 1),
"t" : NumberLong(1)
},
"optimeDurable" : {
"ts" : Timestamp(1552895518, 1),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2019-03-18T07:51:58Z"),
"optimeDurableDate" : ISODate("2019-03-18T07:51:58Z"),
"lastHeartbeat" : ISODate("2019-03-18T07:52:03.064Z"),
"lastHeartbeatRecv" : ISODate("2019-03-18T07:52:03.012Z"),
"pingMs" : NumberLong(0),
"lastHeartbeatMessage" : "",
"syncingTo" : "192.168.103.100:27002",
"syncSourceHost" : "192.168.103.100:27002",
"syncSourceId" : 1,
"infoMessage" : "",
"configVersion" : 3
}
],
"ok" : 1,
"operationTime" : Timestamp(1552895518, 1),
"$clusterTime" : {
"clusterTime" : Timestamp(1552895518, 1),
"signature" : {
"hash" : BinData(0,"0GmanYs4kEgfT36dh6aE7p5BSeI="),
"keyId" : NumberLong("6669632199739834369")
}
}
}
$ for i in 2155 2194 2232 ; do netstat -antulop | grep $i; echo ;done
tcp 0 0 127.0.0.1:27001 0.0.0.0:* LISTEN 2155/mongod off (0.00/0/0)
tcp 0 0 192.168.103.100:27001 0.0.0.0:* LISTEN 2155/mongod off (0.00/0/0)
tcp 0 0 192.168.103.100:27001 192.168.103.100:50635 ESTABLISHED 2155/mongod keepalive (103.02/0/0)
tcp 0 0 192.168.103.100:41951 192.168.103.100:27002 ESTABLISHED 2155/mongod keepalive (103.02/0/0)
tcp 0 0 192.168.103.100:41195 192.168.103.100:27003 ESTABLISHED 2155/mongod keepalive (106.09/0/0)
tcp 0 0 192.168.103.100:27001 192.168.103.100:50632 ESTABLISHED 2155/mongod keepalive (183.92/0/0)
tcp 0 0 192.168.103.100:27001 192.168.103.100:50631 ESTABLISHED 2155/mongod keepalive (225.90/0/0)
tcp 0 0 192.168.103.100:27001 192.168.103.100:50643 ESTABLISHED 2155/mongod keepalive (106.09/0/0)
tcp 0 0 192.168.103.100:27001 192.168.103.100:50638 ESTABLISHED 2155/mongod keepalive (103.02/0/0)
tcp 0 0 192.168.103.100:27001 192.168.103.100:50658 ESTABLISHED 2155/mongod keepalive (123.50/0/0)
tcp 0 0 192.168.103.100:27001 192.168.103.100:50650 ESTABLISHED 2155/mongod keepalive (106.09/0/0)
tcp 0 0 192.168.103.100:27001 192.168.103.100:50659 ESTABLISHED 2155/mongod keepalive (123.50/0/0)
tcp 0 0 127.0.0.1:27002 0.0.0.0:* LISTEN 2194/mongod off (0.00/0/0)
tcp 0 0 192.168.103.100:27002 0.0.0.0:* LISTEN 2194/mongod off (0.00/0/0)
tcp 0 0 192.168.103.100:27002 192.168.103.100:41966 ESTABLISHED 2194/mongod keepalive (106.09/0/0)
tcp 0 0 192.168.103.100:27002 192.168.103.100:41951 ESTABLISHED 2194/mongod keepalive (103.02/0/0)
tcp 0 0 192.168.103.100:27002 192.168.103.100:41973 ESTABLISHED 2194/mongod keepalive (123.50/0/0)
tcp 0 0 192.168.103.100:27002 192.168.103.100:41969 ESTABLISHED 2194/mongod keepalive (106.09/0/0)
tcp 0 0 192.168.103.100:27002 192.168.103.100:41970 ESTABLISHED 2194/mongod keepalive (106.09/0/0)
tcp 0 0 192.168.103.100:50638 192.168.103.100:27001 ESTABLISHED 2194/mongod keepalive (103.02/0/0)
tcp 0 0 192.168.103.100:27002 192.168.103.100:41972 ESTABLISHED 2194/mongod keepalive (117.35/0/0)
tcp 0 0 192.168.103.100:50635 192.168.103.100:27001 ESTABLISHED 2194/mongod keepalive (103.02/0/0)
tcp 0 0 192.168.103.100:50650 192.168.103.100:27001 ESTABLISHED 2194/mongod keepalive (106.09/0/0)
tcp 0 0 192.168.103.100:41201 192.168.103.100:27003 ESTABLISHED 2194/mongod keepalive (106.09/0/0)
tcp 0 0 127.0.0.1:27003 0.0.0.0:* LISTEN 2232/mongod off (0.00/0/0)
tcp 0 0 192.168.103.100:27003 0.0.0.0:* LISTEN 2232/mongod off (0.00/0/0)
tcp 0 0 192.168.103.100:27003 192.168.103.100:41201 ESTABLISHED 2232/mongod keepalive (106.08/0/0)
tcp 0 0 192.168.103.100:41210 192.168.103.100:27003 ESTABLISHED 2232/mongod keepalive (123.49/0/0)
tcp 0 0 192.168.103.100:27003 192.168.103.100:41210 ESTABLISHED 2232/mongod keepalive (123.49/0/0)
tcp 0 0 192.168.103.100:41966 192.168.103.100:27002 ESTABLISHED 2232/mongod keepalive (106.08/0/0)
tcp 0 0 192.168.103.100:41973 192.168.103.100:27002 ESTABLISHED 2232/mongod keepalive (123.49/0/0)
tcp 0 0 192.168.103.100:50658 192.168.103.100:27001 ESTABLISHED 2232/mongod keepalive (123.49/0/0)
tcp 0 0 192.168.103.100:50643 192.168.103.100:27001 ESTABLISHED 2232/mongod keepalive (106.08/0/0)
tcp 0 0 192.168.103.100:27003 192.168.103.100:41195 ESTABLISHED 2232/mongod keepalive (106.08/0/0)
tcp 0 0 192.168.103.100:27003 192.168.103.100:41207 ESTABLISHED 2232/mongod keepalive (106.08/0/0)
tcp 0 0 192.168.103.100:41972 192.168.103.100:27002 ESTABLISHED 2232/mongod keepalive (117.34/0/0)
tcp 0 0 192.168.103.100:50659 192.168.103.100:27001 ESTABLISHED 2232/mongod keepalive (123.49/0/0)
tcp 0 0 192.168.103.100:41969 192.168.103.100:27002 ESTABLISHED 2232/mongod keepalive (106.08/0/0)
Updating a Member's Configuration
Building on the 3-node replica set configured above, this section updates the configuration of one member by changing its host address.
MongoDB Enterprise m103-repl:PRIMARY> var cfg = rs.conf()
MongoDB Enterprise m103-repl:PRIMARY> cfg.members
[
{
"_id" : 0,
"host" : "192.168.103.100:27001",
"arbiterOnly" : false,
"buildIndexes" : true,
"hidden" : false,
"priority" : 1,
"tags" : {
},
"slaveDelay" : NumberLong(0),
"votes" : 1
},
{
"_id" : 1,
"host" : "192.168.103.100:27002",
"arbiterOnly" : false,
"buildIndexes" : true,
"hidden" : false,
"priority" : 1,
"tags" : {
},
"slaveDelay" : NumberLong(0),
"votes" : 1
},
{
"_id" : 2,
"host" : "192.168.103.100:27003",
"arbiterOnly" : false,
"buildIndexes" : true,
"hidden" : false,
"priority" : 1,
"tags" : {
},
"slaveDelay" : NumberLong(0),
"votes" : 1
}
]
MongoDB Enterprise m103-repl:PRIMARY> cfg.members[2].host = "m103:27003"
m103:27003
MongoDB Enterprise m103-repl:PRIMARY> rs.reconfig(cfg)
{
"ok" : 1,
"operationTime" : Timestamp(1554298617, 1),
"$clusterTime" : {
"clusterTime" : Timestamp(1554298617, 1),
"signature" : {
"hash" : BinData(0,"2aAhut/JLz1cCOJYOxPVEs05a1E="),
"keyId" : NumberLong("6675593429663088642")
}
}
}
Adding and Removing Members
Building on the 3-node replica set configured above, this section adds and removes members.
1. Create two configuration files with the following contents:
mongod-repl-4.conf, arbiter.conf (contents omitted)
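As a reference only, a minimal sketch of what these two files could contain, assuming the same keyfile, bind addresses, and replica set name as the existing members; the ports match the rs.add()/rs.addArb() calls below, while the dbPath and log file locations are assumptions:
# mongod-repl-4.conf
storage:
  dbPath: /var/mongodb/db/4           # assumed data directory
net:
  bindIp: 192.168.103.100,localhost
  port: 27004
security:
  authorization: enabled
  keyFile: /var/mongodb/pki/m103-keyfile
systemLog:
  destination: file
  path: /var/mongodb/db/mongod4.log   # assumed log file name
  logAppend: true
processManagement:
  fork: true
replication:
  replSetName: m103-repl

# arbiter.conf
storage:
  dbPath: /var/mongodb/db/arbiter     # assumed data directory
net:
  bindIp: 192.168.103.100,localhost
  port: 28000
security:
  keyFile: /var/mongodb/pki/m103-keyfile
systemLog:
  destination: file
  path: /var/mongodb/db/arbiter.log   # assumed log file name
  logAppend: true
processManagement:
  fork: true
replication:
  replSetName: m103-repl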
$ mongod -f mongod-repl-4.conf
$ mongod -f arbiter.conf
MongoDB Enterprise m103-repl:PRIMARY> rs.add("192.168.103.100:27004")
{
"ok" : 1,
"operationTime" : Timestamp(1552900251, 1),
"$clusterTime" : {
"clusterTime" : Timestamp(1552900251, 1),
"signature" : {
"hash" : BinData(0,"tgvIK0IO8r7x2965MiC3GuBL4NM="),
"keyId" : NumberLong("6669632199739834369")
}
}
}
MongoDB Enterprise m103-repl:PRIMARY> rs.addArb("192.168.103.100:28000")
{
"ok" : 1,
"operationTime" : Timestamp(1552900296, 1),
"$clusterTime" : {
"clusterTime" : Timestamp(1552900296, 1),
"signature" : {
"hash" : BinData(0,"3KERoIv/hxKNDo1Wh/UWvJC4c2U="),
"keyId" : NumberLong("6669632199739834369")
}
}
}
MongoDB Enterprise m103-repl:PRIMARY> rs.isMaster()
{
"hosts" : [
"192.168.103.100:27001",
"192.168.103.100:27002",
"192.168.103.100:27003",
"192.168.103.100:27004"
],
"arbiters" : [
"192.168.103.100:28000"
],
"setName" : "m103-repl",
"setVersion" : 9,
"ismaster" : true,
"secondary" : false,
"primary" : "192.168.103.100:27001",
"me" : "192.168.103.100:27001",
"electionId" : ObjectId("7fffffff0000000000000001"),
"lastWrite" : {
"opTime" : {
"ts" : Timestamp(1552900328, 1),
"t" : NumberLong(1)
},
"lastWriteDate" : ISODate("2019-03-18T09:12:08Z"),
"majorityOpTime" : {
"ts" : Timestamp(1552900328, 1),
"t" : NumberLong(1)
},
"majorityWriteDate" : ISODate("2019-03-18T09:12:08Z")
},
"maxBsonObjectSize" : 16777216,
"maxMessageSizeBytes" : 48000000,
"maxWriteBatchSize" : 100000,
"localTime" : ISODate("2019-03-18T09:12:09.222Z"),
"logicalSessionTimeoutMinutes" : 30,
"minWireVersion" : 0,
"maxWireVersion" : 6,
"readOnly" : false,
"ok" : 1,
"operationTime" : Timestamp(1552900328, 1),
"$clusterTime" : {
"clusterTime" : Timestamp(1552900328, 1),
"signature" : {
"hash" : BinData(0,"NYyWWkgAKi1u8fPZEUQdEc8U3ps="),
"keyId" : NumberLong("6669632199739834369")
}
}
}
MongoDB Enterprise m103-repl:PRIMARY> rs.remove("192.168.103.100:28000")
{
"ok" : 1,
"operationTime" : Timestamp(1552900423, 1),
"$clusterTime" : {
"clusterTime" : Timestamp(1552900423, 1),
"signature" : {
"hash" : BinData(0,"dA94M4Nv2EhsJdq5mC8PjZgC8tY="),
"keyId" : NumberLong("6669632199739834369")
}
}
}
MongoDB Enterprise m103-repl:PRIMARY> var cfg = rs.conf()
MongoDB Enterprise m103-repl:PRIMARY> cfg.members[3].votes = 0
0
MongoDB Enterprise m103-repl:PRIMARY> cfg.members[3].hidden = true
true
MongoDB Enterprise m103-repl:PRIMARY> cfg.members[3].priority = 0
0
MongoDB Enterprise m103-repl:PRIMARY> rs.reconfig(cfg)
{
"ok" : 1,
"operationTime" : Timestamp(1552900605, 1),
"$clusterTime" : {
"clusterTime" : Timestamp(1552900605, 1),
"signature" : {
"hash" : BinData(0,"ibtCCQKaVLHIYaiQE/fNhGfYUFQ="),
"keyId" : NumberLong("6669632199739834369")
}
}
}
MongoDB Enterprise m103-repl:PRIMARY> rs.isMaster()
{
"hosts" : [
"192.168.103.100:27001",
"192.168.103.100:27002",
"192.168.103.100:27003"
],
"setName" : "m103-repl",
"setVersion" : 11,
"ismaster" : true,
"secondary" : false,
"primary" : "192.168.103.100:27001",
"me" : "192.168.103.100:27001",
"electionId" : ObjectId("7fffffff0000000000000001"),
"lastWrite" : {
"opTime" : {
"ts" : Timestamp(1552900698, 1),
"t" : NumberLong(1)
},
"lastWriteDate" : ISODate("2019-03-18T09:18:18Z"),
"majorityOpTime" : {
"ts" : Timestamp(1552900698, 1),
"t" : NumberLong(1)
},
"majorityWriteDate" : ISODate("2019-03-18T09:18:18Z")
},
"maxBsonObjectSize" : 16777216,
"maxMessageSizeBytes" : 48000000,
"maxWriteBatchSize" : 100000,
"localTime" : ISODate("2019-03-18T09:18:28.633Z"),
"logicalSessionTimeoutMinutes" : 30,
"minWireVersion" : 0,
"maxWireVersion" : 6,
"readOnly" : false,
"ok" : 1,
"operationTime" : Timestamp(1552900698, 1),
"$clusterTime" : {
"clusterTime" : Timestamp(1552900698, 1),
"signature" : {
"hash" : BinData(0,"iJnZ5UHzZ3AwTy4b0zXYtLFzv4o="),
"keyId" : NumberLong("6669632199739834369")
}
}
}
Read and Write Operations in the Cluster
$ mongo --host "m103-repl/192.168.103.100:27001" -u "m103-admin" -p "m103-pass" --authenticationDatabase "admin"
MongoDB Enterprise m103-repl:PRIMARY> rs.isMaster()
{
"hosts" : [
"192.168.103.100:27001",
"192.168.103.100:27002",
"192.168.103.100:27003"
],
"setName" : "m103-repl",
"setVersion" : 12,
"ismaster" : true,
"secondary" : false,
"primary" : "192.168.103.100:27001",
"me" : "192.168.103.100:27001",
"electionId" : ObjectId("7fffffff0000000000000002"),
"lastWrite" : {
"opTime" : {
"ts" : Timestamp(1552913488, 1),
"t" : NumberLong(2)
},
"lastWriteDate" : ISODate("2019-03-18T12:51:28Z"),
"majorityOpTime" : {
"ts" : Timestamp(1552913488, 1),
"t" : NumberLong(2)
},
"majorityWriteDate" : ISODate("2019-03-18T12:51:28Z")
},
"maxBsonObjectSize" : 16777216,
"maxMessageSizeBytes" : 48000000,
"maxWriteBatchSize" : 100000,
"localTime" : ISODate("2019-03-18T12:51:36.762Z"),
"logicalSessionTimeoutMinutes" : 30,
"minWireVersion" : 0,
"maxWireVersion" : 6,
"readOnly" : false,
"ok" : 1,
"operationTime" : Timestamp(1552913488, 1),
"$clusterTime" : {
"clusterTime" : Timestamp(1552913488, 1),
"signature" : {
"hash" : BinData(0,"lW8h9HG2b0kXlEhRC5V7sPvU1vY="),
"keyId" : NumberLong("6669632199739834369")
}
}
}
MongoDB Enterprise m103-repl:PRIMARY> use replSetTestDB
switched to db replSetTestDB
MongoDB Enterprise m103-repl:PRIMARY> db.new_collection.insert( { "student": "Matt Javaly", "grade": "A+" } )
WriteResult({ "nInserted" : 1 })
$ mongo --host "192.168.103.100:27002" -u "m103-admin" -p "m103-pass" --authenticationDatabase "admin"
MongoDB Enterprise m103-repl:SECONDARY> show dbs
2019-03-18T12:55:21.314+0000 E QUERY [thread1] Error: listDatabases failed:{
"operationTime" : Timestamp(1552913718, 1),
"ok" : 0,
"errmsg" : "not master and slaveOk=false",
"code" : 13435,
"codeName" : "NotMasterNoSlaveOk",
"$clusterTime" : {
"clusterTime" : Timestamp(1552913718, 1),
"signature" : {
"hash" : BinData(0,"L55vr/U4ScHAh55r5DI7x0DK4K8="),
"keyId" : NumberLong("6669632199739834369")
}
}
} :
_getErrorWithCode@src/mongo/shell/utils.js:25:13
Mongo.prototype.getDBs@src/mongo/shell/mongo.js:67:1
shellHelper.show@src/mongo/shell/utils.js:860:19
shellHelper@src/mongo/shell/utils.js:750:15
@(shellhelp2):1:1
MongoDB Enterprise m103-repl:SECONDARY> rs.slaveOk()
MongoDB Enterprise m103-repl:SECONDARY> show dbs
admin 0.000GB
config 0.000GB
local 0.000GB
replSetTestDB 0.000GB
MongoDB Enterprise m103-repl:SECONDARY> use replSetTestDB
switched to db replSetTestDB
MongoDB Enterprise m103-repl:SECONDARY> db.new_collection.find()
{ "_id" : ObjectId("5c8f94e3501302bdac004143"), "student" : "Matt Javaly", "grade" : "A+" }
MongoDB Enterprise m103-repl:SECONDARY> db.new_collection.insert( { "student": "Norberto Leite", "grade": "B+" } )
WriteResult({ "writeError" : { "code" : 10107, "errmsg" : "not master" } })
MongoDB Enterprise m103-repl:SECONDARY> use admin
switched to db admin
MongoDB Enterprise m103-repl:SECONDARY> db.shutdownServer()
MongoDB Enterprise m103-repl:PRIMARY> rs.status()
...
{
"_id" : 1,
"name" : "192.168.103.100:27002",
"health" : 0,
"state" : 8,
"stateStr" : "(not reachable/healthy)",
"uptime" : 0,
"optime" : {
"ts" : Timestamp(0, 0),
"t" : NumberLong(-1)
},
"optimeDurable" : {
"ts" : Timestamp(0, 0),
"t" : NumberLong(-1)
},
"optimeDate" : ISODate("1970-01-01T00:00:00Z"),
"optimeDurableDate" : ISODate("1970-01-01T00:00:00Z"),
"lastHeartbeat" : ISODate("2019-03-18T13:02:49.468Z"),
"lastHeartbeatRecv" : ISODate("2019-03-18T13:01:17.390Z"),
"pingMs" : NumberLong(0),
"lastHeartbeatMessage" : "Connection refused",
"syncingTo" : "",
"syncSourceHost" : "",
"syncSourceId" : -1,
"infoMessage" : "",
"configVersion" : -1
},
...
MongoDB Enterprise m103-repl:PRIMARY> db.new_collection.insert( { "student": "Kylin Soong", "grade": "A+" } )
MongoDB Enterprise m103-repl:PRIMARY> db.new_collection.find()
{ "_id" : ObjectId("5c8f94e3501302bdac004143"), "student" : "Matt Javaly", "grade" : "A+" }
{ "_id" : ObjectId("5c8f9797501302bdac004144"), "student" : "Kylin Soong", "grade" : "A+" }
$ mongo --host "192.168.103.100:27003" -u "m103-admin" -p "m103-pass" --authenticationDatabase "admin"
MongoDB Enterprise m103-repl:SECONDARY> rs.slaveOk()
MongoDB Enterprise m103-repl:SECONDARY> use replSetTestDB
switched to db replSetTestDB
MongoDB Enterprise m103-repl:SECONDARY> db.new_collection.find()
{ "_id" : ObjectId("5c8f94e3501302bdac004143"), "student" : "Matt Javaly", "grade" : "A+" }
{ "_id" : ObjectId("5c8f9797501302bdac004144"), "student" : "Kylin Soong", "grade" : "A+" }
MongoDB Enterprise m103-repl:SECONDARY> use admin
switched to db admin
MongoDB Enterprise m103-repl:SECONDARY> db.shutdownServer()
$ mongo --host "192.168.103.100:27001" -u "m103-admin" -p "m103-pass" --authenticationDatabase "admin"
MongoDB shell version v3.6.11
connecting to: mongodb://192.168.103.100:27001/?authSource=admin&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("647ca0a3-af10-4216-9480-ebc517f11432") }
MongoDB server version: 3.6.11
MongoDB Enterprise m103-repl:SECONDARY>
MongoDB Enterprise m103-repl:SECONDARY> rs.isMaster()
{
"hosts" : [
"192.168.103.100:27001",
"192.168.103.100:27002",
"192.168.103.100:27003"
],
"setName" : "m103-repl",
"setVersion" : 12,
"ismaster" : false,
"secondary" : true,
"me" : "192.168.103.100:27001",
Failover and Primary Election
$ mongo --host "m103-repl/192.168.103.100:27001" -u "m103-admin" -p "m103-pass" --authenticationDatabase "admin"
MongoDB Enterprise m103-repl:PRIMARY> var cfg = rs.conf()
MongoDB Enterprise m103-repl:PRIMARY> cfg.members[2].priority = 0
0
MongoDB Enterprise m103-repl:PRIMARY> rs.reconfig(cfg)
{
"ok" : 1,
"operationTime" : Timestamp(1552919705, 1),
"$clusterTime" : {
"clusterTime" : Timestamp(1552919705, 1),
"signature" : {
"hash" : BinData(0,"p81iAKaoMrp/Y7u7VlWImY63Hws="),
"keyId" : NumberLong("6669632199739834369")
}
}
}
MongoDB Enterprise m103-repl:PRIMARY> rs.isMaster()
{
"hosts" : [
"192.168.103.100:27001",
"192.168.103.100:27002"
],
"passives" : [
"192.168.103.100:27003"
],
"setName" : "m103-repl",
"setVersion" : 13,
"ismaster" : true,
"secondary" : false,
"primary" : "192.168.103.100:27001",
"me" : "192.168.103.100:27001",
"electionId" : ObjectId("7fffffff0000000000000004"),
...
MongoDB Enterprise m103-repl:PRIMARY> rs.stepDown()
MongoDB Enterprise m103-repl:PRIMARY> rs.isMaster()
{
"hosts" : [
"192.168.103.100:27001",
"192.168.103.100:27002"
],
"passives" : [
"192.168.103.100:27003"
],
"setName" : "m103-repl",
"setVersion" : 13,
"ismaster" : true,
"secondary" : false,
"primary" : "192.168.103.100:27002",
"me" : "192.168.103.100:27002",
"electionId" : ObjectId("7fffffff0000000000000005"),
...
writeConcern
In a MongoDB replica set, writeConcern specifies the level of acknowledgment required for write operations. This section demonstrates writeConcern acknowledgment.
$ mongo --host "192.168.103.100:27003" -u "m103-admin" -p "m103-pass" --authenticationDatabase "admin"
MongoDB shell version v3.6.11
connecting to: mongodb://192.168.103.100:27003/?authSource=admin&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("fe6813ac-8f15-4e5f-b615-a8cef3584a84") }
MongoDB server version: 3.6.11
MongoDB Enterprise m103-repl:SECONDARY> use admin
switched to db admin
MongoDB Enterprise m103-repl:SECONDARY> db.shutdownServer()
MongoDB Enterprise m103-repl:PRIMARY> rs.status()
...
{
"_id" : 2,
"name" : "192.168.103.100:27003",
"health" : 0,
"state" : 8,
"stateStr" : "(not reachable/healthy)",
"uptime" : 0,
...
MongoDB Enterprise m103-repl:PRIMARY> use testDatabase
switched to db testDatabase
MongoDB Enterprise m103-repl:PRIMARY> db.new_data.insert({"m103": "very fun"}, { writeConcern: { w: 3, wtimeout: 1000 }})
WriteResult({
"nInserted" : 1,
"writeConcernError" : {
"code" : 64,
"codeName" : "WriteConcernFailed",
"errInfo" : {
"wtimeout" : true
},
"errmsg" : "waiting for replication timed out"
}
})
readConcern
$ mongoimport --drop --host m103-repl/192.168.103.100:27002,192.168.103.100:27001,192.168.103.100:27003 -u "m103-admin" -p "m103-pass" --authenticationDatabase "admin" --db applicationData --collection products /dataset/products.json
2019-03-18T15:24:31.240+0000 connected to: m103-repl/192.168.103.100:27002,192.168.103.100:27001,192.168.103.100:27003
2019-03-18T15:24:31.241+0000 dropping: applicationData.products
2019-03-18T15:24:34.225+0000 [#.......................] applicationData.products 5.08MB/87.9MB (5.8%)
2019-03-18T15:24:37.220+0000 [##......................] applicationData.products 10.3MB/87.9MB (11.7%)
2019-03-18T15:24:40.220+0000 [####....................] applicationData.products 15.5MB/87.9MB (17.6%)
2019-03-18T15:24:43.220+0000 [#####...................] applicationData.products 20.7MB/87.9MB (23.6%)
2019-03-18T15:24:46.221+0000 [######..................] applicationData.products 25.5MB/87.9MB (29.0%)
2019-03-18T15:24:49.220+0000 [########................] applicationData.products 30.6MB/87.9MB (34.8%)
2019-03-18T15:24:52.228+0000 [########................] applicationData.products 32.5MB/87.9MB (37.0%)
2019-03-18T15:24:55.220+0000 [#########...............] applicationData.products 34.2MB/87.9MB (38.9%)
2019-03-18T15:24:58.220+0000 [##########..............] applicationData.products 39.1MB/87.9MB (44.5%)
2019-03-18T15:25:01.220+0000 [###########.............] applicationData.products 44.0MB/87.9MB (50.0%)
2019-03-18T15:25:04.220+0000 [#############...........] applicationData.products 48.9MB/87.9MB (55.6%)
2019-03-18T15:25:07.221+0000 [##############..........] applicationData.products 53.9MB/87.9MB (61.3%)
2019-03-18T15:25:10.220+0000 [###############.........] applicationData.products 58.6MB/87.9MB (66.6%)
2019-03-18T15:25:13.220+0000 [#################.......] applicationData.products 63.7MB/87.9MB (72.4%)
2019-03-18T15:25:16.220+0000 [##################......] applicationData.products 69.0MB/87.9MB (78.4%)
2019-03-18T15:25:19.220+0000 [####################....] applicationData.products 73.7MB/87.9MB (83.8%)
2019-03-18T15:25:22.220+0000 [#####################...] applicationData.products 78.7MB/87.9MB (89.4%)
2019-03-18T15:25:25.220+0000 [######################..] applicationData.products 83.3MB/87.9MB (94.7%)
2019-03-18T15:25:27.518+0000 [########################] applicationData.products 87.9MB/87.9MB (100.0%)
2019-03-18T15:25:27.518+0000 imported 516784 documents
$ mongo --host "m103-repl/192.168.103.100:27001" -u "m103-admin" -p "m103-pass" --authenticationDatabase "admin"
MongoDB Enterprise m103-repl:PRIMARY> use applicationData
switched to db applicationData
Sharding Management
Configuring a Sharded Cluster
Building on the 3-node replica set configured above, this section configures a 3-node config server replica set and a mongos, forming a minimal sharded cluster.
$ mongo --host "m103-repl/192.168.103.100:27001" -u "m103-admin" -p "m103-pass" --authenticationDatabase "admin"
MongoDB Enterprise m103-repl:PRIMARY> rs.isMaster()
{
"hosts" : [
"192.168.103.100:27001",
"192.168.103.100:27002",
"192.168.103.100:27003"
],
"setName" : "m103-repl",
"setVersion" : 3,
"ismaster" : true,
"secondary" : false,
"primary" : "192.168.103.100:27001",
"me" : "192.168.103.100:27001",
...
2. Create three config server configuration files with the following contents:
mongod-csrs-1.conf, mongod-csrs-2.conf, mongod-csrs-3.conf (contents omitted)
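As a reference only, a minimal sketch of what mongod-csrs-1.conf could contain, based on the config server ports, data directories, and replica set name (m103-csrs) used in the steps below; the other two files would differ only in dbPath, port, and log path, and the log file names are assumptions:
sharding:
  clusterRole: configsvr
storage:
  dbPath: /var/mongodb/db/csrs1       # csrs2 and csrs3 for the other members
net:
  bindIp: 192.168.103.100,localhost
  port: 26001                         # 26002 and 26003 for the other members
security:
  keyFile: /var/mongodb/pki/m103-keyfile   # assuming the keyfile created earlier is reused
systemLog:
  destination: file
  path: /var/mongodb/db/csrs1.log     # assumed log file name; adjust per member
  logAppend: true
processManagement:
  fork: true
replication:
  replSetName: m103-csrs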
$ mkdir -p /var/mongodb/db/csrs{1,2,3}
$ mongod -f mongod-csrs-1.conf
$ mongod -f mongod-csrs-2.conf
$ mongod -f mongod-csrs-3.conf
$ ps -ef | grep mongod
vagrant 2368 1 0 09:37 ? 00:00:08 mongod -f mongod-repl-1.conf
vagrant 2398 1 0 09:37 ? 00:00:07 mongod -f mongod-repl-2.conf
vagrant 2428 1 0 09:37 ? 00:00:08 mongod -f mongod-repl-3.conf
vagrant 2789 1 0 09:56 ? 00:00:00 mongod -f mongod-csrs-1.conf
vagrant 2827 1 0 09:56 ? 00:00:00 mongod -f mongod-csrs-2.conf
vagrant 2864 1 0 09:56 ? 00:00:00 mongod -f mongod-csrs-3.conf
$ mongo --port 26001
MongoDB Enterprise > rs.initiate()
{
"info2" : "no configuration specified. Using a default configuration for the set",
"me" : "192.168.103.100:26001",
"ok" : 1,
"$gleStats" : {
"lastOpTime" : Timestamp(1552989761, 1),
"electionId" : ObjectId("000000000000000000000000")
}
}
MongoDB Enterprise m103-csrs:SECONDARY> use admin
switched to db admin
MongoDB Enterprise m103-csrs:PRIMARY> db.createUser({user: "m103-admin", pwd: "m103-pass", roles: [{role: "root", db: "admin"}]})
Successfully added user: {
"user" : "m103-admin",
"roles" : [
{
"role" : "root",
"db" : "admin"
}
]
}
MongoDB Enterprise m103-csrs:PRIMARY> db.auth("m103-admin", "m103-pass")
1
MongoDB Enterprise m103-csrs:PRIMARY> rs.add("192.168.103.100:26002")
{
"ok" : 1,
"operationTime" : Timestamp(1552989889, 1),
"$gleStats" : {
"lastOpTime" : {
"ts" : Timestamp(1552989889, 1),
"t" : NumberLong(1)
},
"electionId" : ObjectId("7fffffff0000000000000001")
},
"$clusterTime" : {
"clusterTime" : Timestamp(1552989889, 1),
"signature" : {
"hash" : BinData(0,"L8929773rnlHXegrwczReqJ0uUk="),
"keyId" : NumberLong("6670040243107790859")
}
}
}
MongoDB Enterprise m103-csrs:PRIMARY> rs.add("192.168.103.100:26003")
{
"ok" : 1,
"operationTime" : Timestamp(1552989893, 1),
"$gleStats" : {
"lastOpTime" : {
"ts" : Timestamp(1552989893, 1),
"t" : NumberLong(1)
},
"electionId" : ObjectId("7fffffff0000000000000001")
},
"$clusterTime" : {
"clusterTime" : Timestamp(1552989893, 1),
"signature" : {
"hash" : BinData(0,"SkmXMO118gGEp3S9XAmunUo1omU="),
"keyId" : NumberLong("6670040243107790859")
}
}
}
MongoDB Enterprise m103-csrs:PRIMARY> rs.isMaster()
{
"hosts" : [
"192.168.103.100:26001",
"192.168.103.100:26002",
"192.168.103.100:26003"
],
"setName" : "m103-csrs",
"setVersion" : 3,
"ismaster" : true,
"secondary" : false,
"primary" : "192.168.103.100:26001",
"me" : "192.168.103.100:26001",
"electionId" : ObjectId("7fffffff0000000000000001"),
...
10. Create the mongos.conf file with the following contents:
mongos.conf (contents omitted)
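As a reference only, a minimal sketch of what mongos.conf could contain, assuming the config server replica set and the port 26000 used in the steps below; the log file location is an assumption:
sharding:
  configDB: m103-csrs/192.168.103.100:26001,192.168.103.100:26002,192.168.103.100:26003
security:
  keyFile: /var/mongodb/pki/m103-keyfile    # assuming the keyfile created earlier is reused
net:
  bindIp: 192.168.103.100,localhost
  port: 26000
systemLog:
  destination: file
  path: /var/mongodb/db/mongos.log          # assumed log file location
  logAppend: true
processManagement:
  fork: true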
$ mongos -f mongos.conf
$ ps -ef | grep mongos
vagrant 5065 1 0 14:25 ? 00:00:00 mongos -f mongos.conf
$ sudo netstat -antulop | grep 5065
tcp 0 0 192.168.103.100:26000 0.0.0.0:* LISTEN 5065/mongos off (0.00/0/0)
tcp 0 0 127.0.0.1:26000 0.0.0.0:* LISTEN 5065/mongos off (0.00/0/0)
tcp 0 0 192.168.103.100:51380 192.168.103.100:26002 ESTABLISHED 5065/mongos keepalive (127.78/0/0)
tcp 0 0 192.168.103.100:51374 192.168.103.100:26002 ESTABLISHED 5065/mongos keepalive (96.04/0/0)
tcp 0 0 192.168.103.100:56908 192.168.103.100:26003 ESTABLISHED 5065/mongos keepalive (96.04/0/0)
tcp 0 0 192.168.103.100:44612 192.168.103.100:26001 ESTABLISHED 5065/mongos keepalive (96.04/0/0)
tcp 0 0 192.168.103.100:56913 192.168.103.100:26003 ESTABLISHED 5065/mongos keepalive (96.04/0/0)
tcp 0 0 192.168.103.100:44608 192.168.103.100:26001 ESTABLISHED 5065/mongos keepalive (96.04/0/0)
$ mongo --port 26000 -u "m103-admin" -p "m103-pass" --authenticationDatabase "admin"
MongoDB shell version v3.6.11
connecting to: mongodb://127.0.0.1:26000/?authSource=admin&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("0691aaa5-4695-457a-b848-68819fdf5b75") }
MongoDB server version: 3.6.11
MongoDB Enterprise mongos>
MongoDB Enterprise mongos> sh.status()
--- Sharding Status ---
sharding version: {
"_id" : 1,
"minCompatibleVersion" : 5,
"currentVersion" : 6,
"clusterId" : ObjectId("5c90be43507efebbab5cc5e8")
}
shards:
active mongoses:
"3.6.11" : 1
autosplit:
Currently enabled: yes
balancer:
Currently enabled: yes
Currently running: no
Failed balancer rounds in last 5 attempts: 0
Migration Results for the last 24 hours:
No recent migrations
databases:
{ "_id" : "config", "primary" : "config", "partitioned" : true }
15. Update the three shard member configuration files, adding sharding and wiredTiger settings, as follows:
mongod-repl-1.conf, mongod-repl-2.conf, mongod-repl-3.conf (updated contents omitted)
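As a reference only, a sketch of the sections that would be added to each existing mongod-repl-*.conf; the cacheSizeGB value is an illustrative assumption, not taken from the original files:
sharding:
  clusterRole: shardsvr
storage:
  dbPath: /var/mongodb/db/1                 # unchanged per member
  wiredTiger:
    engineConfig:
      cacheSizeGB: 0.1                      # assumed example value to limit cache size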
$ mongo --port 27002 -u "m103-admin" -p "m103-pass" --authenticationDatabase "admin"
MongoDB Enterprise m103-repl:SECONDARY> use admin
switched to db admin
MongoDB Enterprise m103-repl:SECONDARY> db.shutdownServer()
$ mongod -f mongod-repl-2.conf
$ mongo --port 27003 -u "m103-admin" -p "m103-pass" --authenticationDatabase "admin"
MongoDB Enterprise m103-repl:SECONDARY> use admin
switched to db admin
MongoDB Enterprise m103-repl:SECONDARY> db.shutdownServer()
$ mongod -f mongod-repl-3.conf
$ mongo --port 27001 -u "m103-admin" -p "m103-pass" --authenticationDatabase "admin"
MongoDB Enterprise m103-repl:PRIMARY> rs.stepDown()
MongoDB Enterprise m103-repl:SECONDARY> use admin
switched to db admin
MongoDB Enterprise m103-repl:SECONDARY> db.shutdownServer()
$ mongod -f mongod-repl-1.conf
$ mongo --port 26000 -u "m103-admin" -p "m103-pass" --authenticationDatabase "admin"
MongoDB Enterprise mongos> sh.addShard("m103-repl/192.168.103.100:27002")
{
"shardAdded" : "m103-repl",
"ok" : 1,
"operationTime" : Timestamp(1553007892, 9),
"$clusterTime" : {
"clusterTime" : Timestamp(1553007892, 9),
"signature" : {
"hash" : BinData(0,"Fkn0MODTuvcLjFT9uWPkcjfB1s0="),
"keyId" : NumberLong("6670040243107790859")
}
}
}
MongoDB Enterprise mongos> sh.status()
--- Sharding Status ---
sharding version: {
"_id" : 1,
"minCompatibleVersion" : 5,
"currentVersion" : 6,
"clusterId" : ObjectId("5c90be43507efebbab5cc5e8")
}
shards:
{ "_id" : "m103-repl", "host" : "m103-repl/192.168.103.100:27001,192.168.103.100:27002,192.168.103.100:27003", "state" : 1 }
active mongoses:
"3.6.11" : 1
autosplit:
Currently enabled: yes
balancer:
Currently enabled: yes
Currently running: no
Failed balancer rounds in last 5 attempts: 0
Migration Results for the last 24 hours:
No recent migrations
databases:
{ "_id" : "config", "primary" : "config", "partitioned" : true }
Configuring a Second Shard
Building on the config server replica set, the first shard replica set, and the mongos set up earlier, this section adds a second shard replica set.
$ mongod -f mongod-csrs-1.conf
$ mongod -f mongod-csrs-2.conf
$ mongod -f mongod-csrs-3.conf
$ mongos -f mongos.conf
$ mongod -f mongod-repl-1.conf
$ mongod -f mongod-repl-2.conf
$ mongod -f mongod-repl-3.conf
$ ps -ef | grep mongo
vagrant 2202 1 1 02:14 ? 00:00:02 mongod -f mongod-csrs-1.conf
vagrant 2285 1 1 02:14 ? 00:00:02 mongod -f mongod-csrs-2.conf
vagrant 2371 1 1 02:14 ? 00:00:02 mongod -f mongod-csrs-3.conf
vagrant 2482 1 0 02:14 ? 00:00:00 mongos -f mongos.conf
vagrant 2519 1 1 02:15 ? 00:00:01 mongod -f mongod-repl-1.conf
vagrant 2615 1 1 02:15 ? 00:00:01 mongod -f mongod-repl-2.conf
vagrant 2720 1 1 02:15 ? 00:00:01 mongod -f mongod-repl-3.conf
$ mongo --port 26000 -u "m103-admin" -p "m103-pass" --authenticationDatabase "admin"
MongoDB Enterprise mongos> sh.status()
--- Sharding Status ---
sharding version: {
"_id" : 1,
"minCompatibleVersion" : 5,
"currentVersion" : 6,
"clusterId" : ObjectId("5c90be43507efebbab5cc5e8")
}
shards:
{ "_id" : "m103-repl", "host" : "m103-repl/192.168.103.100:27001,192.168.103.100:27002,192.168.103.100:27003", "state" : 1 }
active mongoses:
"3.6.11" : 1
autosplit:
Currently enabled: yes
balancer:
Currently enabled: yes
Currently running: no
6. Create the configuration files for the second shard's members as follows:
mongod-repl-4.conf, mongod-repl-5.conf, mongod-repl-6.conf (contents omitted)
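As a reference only, a minimal sketch of what mongod-repl-4.conf could contain, based on the ports, data directories, and replica set name (m103-repl-2) used in the steps below; mongod-repl-5.conf and mongod-repl-6.conf would differ only in dbPath, port, and log path, and the log file names are assumptions:
sharding:
  clusterRole: shardsvr
storage:
  dbPath: /var/mongodb/db/4                 # 5 and 6 for the other members
net:
  bindIp: 192.168.103.100,localhost
  port: 27004                               # 27005 and 27006 for the other members
security:
  keyFile: /var/mongodb/pki/m103-keyfile    # assuming the keyfile created earlier is reused
systemLog:
  destination: file
  path: /var/mongodb/db/mongod4.log         # assumed log file name; adjust per member
  logAppend: true
processManagement:
  fork: true
replication:
  replSetName: m103-repl-2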
$ mkdir /var/mongodb/db/{4,5,6}
$ mongod -f mongod-repl-4.conf
$ mongod -f mongod-repl-5.conf
$ mongod -f mongod-repl-6.conf
$ mongo --port 27004
MongoDB Enterprise > rs.initiate()
{
"info2" : "no configuration specified. Using a default configuration for the set",
"me" : "192.168.103.100:27004",
"ok" : 1
}
MongoDB Enterprise m103-repl-2:PRIMARY> use admin
switched to db admin
MongoDB Enterprise m103-repl-2:PRIMARY> db.createUser({user: "m103-admin", pwd: "m103-pass", roles: [{role: "root", db: "admin"}]})
Successfully added user: {
"user" : "m103-admin",
"roles" : [
{
"role" : "root",
"db" : "admin"
}
]
}
MongoDB Enterprise m103-repl-2:PRIMARY> db.auth("m103-admin", "m103-pass")
1
MongoDB Enterprise m103-repl-2:PRIMARY> rs.add("192.168.103.100:27005")
{ "ok" : 1 }
MongoDB Enterprise m103-repl-2:PRIMARY> rs.add("192.168.103.100:27006")
{ "ok" : 1 }
MongoDB Enterprise m103-repl-2:PRIMARY> rs.isMaster()
{
"hosts" : [
"192.168.103.100:27004",
"192.168.103.100:27005",
"192.168.103.100:27006"
],
"setName" : "m103-repl-2",
"setVersion" : 3,
"ismaster" : true,
"secondary" : false,
"primary" : "192.168.103.100:27004",
"me" : "192.168.103.100:27004",
"electionId" : ObjectId("7fffffff0000000000000001"),
"lastWrite" : {
"opTime" : {
"ts" : Timestamp(1553223267, 1),
"t" : NumberLong(1)
},
"lastWriteDate" : ISODate("2019-03-22T02:54:27Z"),
"majorityOpTime" : {
"ts" : Timestamp(1553223267, 1),
"t" : NumberLong(1)
},
"majorityWriteDate" : ISODate("2019-03-22T02:54:27Z")
},
"maxBsonObjectSize" : 16777216,
"maxMessageSizeBytes" : 48000000,
"maxWriteBatchSize" : 100000,
"localTime" : ISODate("2019-03-22T02:54:28.188Z"),
"minWireVersion" : 0,
"maxWireVersion" : 6,
"readOnly" : false,
"ok" : 1
}
$ mongo --port 26000 -u "m103-admin" -p "m103-pass" --authenticationDatabase "admin"
MongoDB Enterprise mongos> sh.addShard("m103-repl-2/192.168.103.100:27004")
{
"shardAdded" : "m103-repl-2",
"ok" : 1,
"operationTime" : Timestamp(1553223361, 10),
"$clusterTime" : {
"clusterTime" : Timestamp(1553223361, 10),
"signature" : {
"hash" : BinData(0,"/aXB+cCpajXDA6Uk5Y1hhp2BMjo="),
"keyId" : NumberLong("6670040243107790859")
}
}
}
MongoDB Enterprise mongos> sh.status()
--- Sharding Status ---
sharding version: {
"_id" : 1,
"minCompatibleVersion" : 5,
"currentVersion" : 6,
"clusterId" : ObjectId("5c90be43507efebbab5cc5e8")
}
shards:
{ "_id" : "m103-repl", "host" : "m103-repl/192.168.103.100:27001,192.168.103.100:27002,192.168.103.100:27003", "state" : 1 }
{ "_id" : "m103-repl-2", "host" : "m103-repl-2/192.168.103.100:27004,192.168.103.100:27005,192.168.103.100:27006", "state" : 1 }
active mongoses:
"3.6.11" : 1
autosplit:
Currently enabled: yes
balancer:
Currently enabled: yes
$ mongoimport --drop /dataset/products.json --port 26000 -u "m103-admin" -p "m103-pass" --authenticationDatabase "admin" --db m103 --collection products
2019-03-22T03:02:41.271+0000 connected to: localhost:26000
2019-03-22T03:02:41.272+0000 dropping: m103.products
2019-03-22T03:02:44.264+0000 [........................] m103.products 3.37MB/87.9MB (3.8%)
2019-03-22T03:02:47.264+0000 [........................] m103.products 3.37MB/87.9MB (3.8%)
2019-03-22T03:02:50.260+0000 [##......................] m103.products 8.12MB/87.9MB (9.2%)
2019-03-22T03:02:53.259+0000 [###.....................] m103.products 13.5MB/87.9MB (15.3%)
2019-03-22T03:02:56.259+0000 [#####...................] m103.products 18.4MB/87.9MB (20.9%)
2019-03-22T03:02:59.260+0000 [#####...................] m103.products 18.5MB/87.9MB (21.1%)
2019-03-22T03:03:02.259+0000 [#####...................] m103.products 21.9MB/87.9MB (24.9%)
2019-03-22T03:03:05.260+0000 [#######.................] m103.products 26.9MB/87.9MB (30.6%)
2019-03-22T03:03:08.260+0000 [#######.................] m103.products 29.1MB/87.9MB (33.1%)
2019-03-22T03:03:11.259+0000 [#######.................] m103.products 29.1MB/87.9MB (33.1%)
2019-03-22T03:03:14.259+0000 [#########...............] m103.products 33.0MB/87.9MB (37.5%)
2019-03-22T03:03:17.259+0000 [##########..............] m103.products 38.2MB/87.9MB (43.5%)
2019-03-22T03:03:20.260+0000 [###########.............] m103.products 43.4MB/87.9MB (49.4%)
2019-03-22T03:03:23.264+0000 [############............] m103.products 47.3MB/87.9MB (53.8%)
2019-03-22T03:03:26.260+0000 [############............] m103.products 47.3MB/87.9MB (53.8%)
2019-03-22T03:03:29.259+0000 [##############..........] m103.products 51.9MB/87.9MB (59.1%)
2019-03-22T03:03:32.259+0000 [###############.........] m103.products 57.0MB/87.9MB (64.8%)
2019-03-22T03:03:35.259+0000 [################........] m103.products 62.2MB/87.9MB (70.7%)
2019-03-22T03:03:38.261+0000 [##################......] m103.products 67.6MB/87.9MB (76.9%)
2019-03-22T03:03:41.259+0000 [###################.....] m103.products 72.7MB/87.9MB (82.7%)
2019-03-22T03:03:44.259+0000 [#####################...] m103.products 78.0MB/87.9MB (88.7%)
2019-03-22T03:03:47.260+0000 [######################..] m103.products 83.0MB/87.9MB (94.4%)
2019-03-22T03:03:49.688+0000 [########################] m103.products 87.9MB/87.9MB (100.0%)
2019-03-22T03:03:49.688+0000 imported 516784 documents
MongoDB Enterprise mongos> sh.enableSharding("m103")
{
"ok" : 1,
"operationTime" : Timestamp(1553224043, 5),
"$clusterTime" : {
"clusterTime" : Timestamp(1553224043, 5),
"signature" : {
"hash" : BinData(0,"YYMdriYT49B7C4xe86DPEhffldo="),
"keyId" : NumberLong("6670040243107790859")
}
}
}
MongoDB Enterprise mongos> db.products.createIndex({"sku": 1})
{
"raw" : {
"m103-repl/192.168.103.100:27001,192.168.103.100:27002,192.168.103.100:27003" : {
"createdCollectionAutomatically" : false,
"numIndexesBefore" : 1,
"numIndexesAfter" : 2,
"ok" : 1
}
},
"ok" : 1,
"operationTime" : Timestamp(1553225658, 1),
"$clusterTime" : {
"clusterTime" : Timestamp(1553225658, 1),
"signature" : {
"hash" : BinData(0,"7mtbmtuvoV8oO17YDJjRTxaXMk0="),
"keyId" : NumberLong("6670040243107790859")
}
}
}
MongoDB Enterprise mongos> db.adminCommand({shardCollection: "m103.products", key: {sku: 1}})
{
"collectionsharded" : "m103.products",
"collectionUUID" : UUID("a9f9e8d6-57f5-4b27-877e-d62e7bd3ad5a"),
"ok" : 1,
"operationTime" : Timestamp(1553225779, 8),
"$clusterTime" : {
"clusterTime" : Timestamp(1553225779, 8),
"signature" : {
"hash" : BinData(0,"VTCDFiglVcFm2nPyUHvfQm9km0Q="),
"keyId" : NumberLong("6670040243107790859")
}
}
}
Sharded query results

- Query on mongos
- Query on shard one (m103-repl)
- Query on shard two (m103-repl-2)

Note: the count returned by mongos is the total number of imported documents, and the counts from shard one and shard two add up to that same total (see the sketch below).
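The screenshots of these queries were not preserved; the following is a minimal sketch of equivalent count commands, assuming the mongos above listens on port 26000, the shards listen on 192.168.103.100:27001-27003 and 27004-27006, and the m103-admin user is also valid when connecting to each shard directly (shard-local credentials may differ in your setup).

# Total count via mongos
$ mongo m103 --port 26000 -u "m103-admin" -p "m103-pass" --authenticationDatabase "admin" --eval 'db.products.count()'

# Documents stored on shard one (m103-repl) only
$ mongo m103 --host "m103-repl/192.168.103.100:27001,192.168.103.100:27002,192.168.103.100:27003" -u "m103-admin" -p "m103-pass" --authenticationDatabase "admin" --eval 'db.products.count()'

# Documents stored on shard two (m103-repl-2) only
$ mongo m103 --host "m103-repl-2/192.168.103.100:27004,192.168.103.100:27005,192.168.103.100:27006" -u "m103-admin" -p "m103-pass" --authenticationDatabase "admin" --eval 'db.products.count()'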
20. Viewing sharding statistics (the config database)

- List the databases: there are three databases; m103 is sharded (partitioned), and its primary shard is m103-repl-2.
- View the sharded collections: m103.products is sharded on its sku key.
- List the shards: there are two shards, m103-repl and m103-repl-2.
- View the chunks: m103.products is split into three chunks, each covering its own sku range and identified by its own chunk _id.
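The commands behind those statistics are not shown; a minimal sketch of the equivalent queries against the config database, run in the same mongos session as above, might look like this:

MongoDB Enterprise mongos> use config
switched to db config
MongoDB Enterprise mongos> db.databases.find()        // databases and their primary shards (m103 -> m103-repl-2)
MongoDB Enterprise mongos> db.collections.find()      // sharded collections and their shard keys (m103.products on sku)
MongoDB Enterprise mongos> db.shards.find()           // the two shards: m103-repl and m103-repl-2
MongoDB Enterprise mongos> db.chunks.find().pretty()  // one document per chunk, with its min/max sku values and _id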
ConfigDB
$ mongoimport --drop products.json --port 27017 -u "root" -p "mongo" --authenticationDatabase "admin" --db testConfigDB --collection products
2019-06-02T16:34:48.382+0800 connected to: localhost:27017
2019-06-02T16:34:48.397+0800 dropping: testConfigDB.products
2019-06-02T16:34:51.366+0800 [###.....................] testConfigDB.products 11.7MB/87.9MB (13.3%)
2019-06-02T16:34:54.365+0800 [#######.................] testConfigDB.products 25.8MB/87.9MB (29.3%)
2019-06-02T16:34:57.362+0800 [##########..............] testConfigDB.products 39.6MB/87.9MB (45.0%)
2019-06-02T16:35:00.365+0800 [##############..........] testConfigDB.products 52.9MB/87.9MB (60.2%)
2019-06-02T16:35:03.367+0800 [##################......] testConfigDB.products 66.4MB/87.9MB (75.5%)
2019-06-02T16:35:06.362+0800 [#####################...] testConfigDB.products 80.6MB/87.9MB (91.6%)
2019-06-02T16:35:07.746+0800 [########################] testConfigDB.products 87.9MB/87.9MB (100.0%)
2019-06-02T16:35:07.746+0800 imported 516784 documents
use testConfigDB
db.products.count()
516784
db.products.aggregate([{$project: {_id: 0, sku: 1}}, {$sort: {sku: 1}}, {$limit: 20}])
db.products.aggregate([{$project: {_id: 0, sku: 1}}, {$sort: {sku: -1}}, {$limit: 20}])
db.products.aggregate([{$match: {sku: {$lt: 20000044}}}, {$project: {_id: 0, sku: 1}}])
db.products.aggregate([{$match: {sku: {$gt: 9999265500050003}}}, {$project: {_id: 0, sku: 1}}])
use config
db.mongos.find()
db.shards.find()
{ "_id" : "repl-a", "host" : "repl-a/localhost:27000,localhost:27001,localhost:27002", "state" : 1 }
{ "_id" : "repl-b", "host" : "repl-b/localhost:28000,localhost:28001,localhost:28002", "state" : 1 }
db.databases.find()
db.collections.find()
db.chunks.find()
sh.enableSharding("testConfigDB")
use testConfigDB
db.products.stats()
...
"ns" : "testConfigDB.products",
"count" : 516784,
"size" : 79714444,
"storageSize" : 25968640,
"totalIndexSize" : 4947968,
"indexSizes" : {
"_id_" : 4947968
},
db.products.createIndex({sku: 1})
db.products.stats()
...
"ns" : "testConfigDB.products",
"count" : 516784,
"size" : 79714444,
"storageSize" : 25968640,
"totalIndexSize" : 10063872,
"indexSizes" : {
"_id_" : 4947968,
"sku_1" : 5115904
},
db.adminCommand({shardCollection: "testConfigDB.products", key: {sku: 1}})
sh.status()
...
databases:
{ "_id" : "testConfigDB", "primary" : "repl-b", "partitioned" : true, "version" : { "uuid" : UUID("03d0bae2-2dcd-4311-8802-5b9cc157ea22"), "lastMod" : 1 } }
testConfigDB.products
shard key: { "sku" : 1 }
unique: false
balancing: true
chunks:
repl-a 1
repl-b 2
{ "sku" : { "$minKey" : 1 } } -->> { "sku" : 23153496 } on : repl-a Timestamp(2, 0)
{ "sku" : 23153496 } -->> { "sku" : 28928914 } on : repl-b Timestamp(2, 1)
{ "sku" : 28928914 } -->> { "sku" : { "$maxKey" : 1 } } on : repl-b Timestamp(1, 2)
use config
db.chunks.aggregate({$match: {ns: {$eq: "testConfigDB.products"}}},{$project: {_id: 0, min: "$min.sku", max: "$max.sku", shard: 1}})
{ "shard" : "repl-a", "min" : { "$minKey" : 1 }, "max" : 23153496 }
{ "shard" : "repl-b", "min" : 23153496, "max" : 28928914 }
{ "shard" : "repl-b", "min" : 28928914, "max" : { "$maxKey" : 1 } }
db.collections.find()
$ mongo testConfigDB -u root -p mongo --authenticationDatabase "admin" --host "repl-a/localhost:27000,localhost:27001,localhost:27002" --eval 'db.products.count()'
217885
$ mongo testConfigDB -u root -p mongo --authenticationDatabase "admin" --host "repl-b/localhost:28000,localhost:28001,localhost:28002" --eval 'db.products.count()'
298899
$ mongo testConfigDB -u root -p mongo --authenticationDatabase "admin" --eval 'db.products.count()'
516784
Hashed Sharding
$ mongoimport --drop products.json --port 27017 -u "root" -p "mongo" --authenticationDatabase "admin" --db testHashedSharding --collection products
2019-06-02T18:54:14.607+0800 connected to: localhost:27017
2019-06-02T18:54:14.621+0800 dropping: testHashedSharding.products
2019-06-02T18:54:17.586+0800 [###.....................] testHashedSharding.products 12.0MB/87.9MB (13.7%)
2019-06-02T18:54:20.589+0800 [######..................] testHashedSharding.products 25.1MB/87.9MB (28.6%)
2019-06-02T18:54:23.588+0800 [##########..............] testHashedSharding.products 38.2MB/87.9MB (43.5%)
2019-06-02T18:54:26.588+0800 [#############...........] testHashedSharding.products 51.3MB/87.9MB (58.3%)
2019-06-02T18:54:29.589+0800 [#################.......] testHashedSharding.products 64.5MB/87.9MB (73.4%)
2019-06-02T18:54:32.589+0800 [#####################...] testHashedSharding.products 77.6MB/87.9MB (88.3%)
2019-06-02T18:54:34.695+0800 [########################] testHashedSharding.products 87.9MB/87.9MB (100.0%)
use testHashedSharding
db.products.count()
516784
db.products.distinct('sku').length
516784
use config
db.databases.find({}, {_id: 0, primary: 1, partitioned: 1, lastMod: 1})
{ "primary" : "repl-b", "partitioned" : true }
{ "primary" : "repl-a", "partitioned" : false }
sh.enableSharding("testHashedSharding")
use testHashedSharding
db.products.createIndex({sku: "hashed"})
sh.shardCollection("testHashedSharding.products", {sku: "hashed"})
sh.status()
...
databases:
{ "_id" : "testHashedSharding", "primary" : "repl-a", "partitioned" : true, "version" : { "uuid" : UUID("517a7dd2-4d4d-4167-b0a8-cfba67ecd0d5"), "lastMod" : 1 } }
testHashedSharding.products
shard key: { "sku" : "hashed" }
unique: false
balancing: true
chunks:
repl-a 2
repl-b 1
{ "sku" : { "$minKey" : 1 } } -->> { "sku" : NumberLong("-1442199500577127961") } on : repl-b Timestamp(2, 0)
{ "sku" : NumberLong("-1442199500577127961") } -->> { "sku" : NumberLong("6331935390792935387") } on : repl-a Timestamp(2, 1)
{ "sku" : NumberLong("6331935390792935387") } -->> { "sku" : { "$maxKey" : 1 } } on : repl-a Timestamp(1, 2)
use config
db.databases.find({}, {_id: 0, primary: 1, partitioned: 1, lastMod: 1})
{ "primary" : "repl-a", "partitioned" : true }
{ "primary" : "repl-b", "partitioned" : true }
db.chunks.aggregate({$match: {ns: {$eq: "testHashedSharding.products"}}},{$project: {_id: 0, min: "$min.sku", max: "$max.sku", shard: 1}})
{ "shard" : "repl-b", "min" : { "$minKey" : 1 }, "max" : NumberLong("-1442199500577127961") }
{ "shard" : "repl-a", "min" : NumberLong("-1442199500577127961"), "max" : NumberLong("6331935390792935387") }
{ "shard" : "repl-a", "min" : NumberLong("6331935390792935387"), "max" : { "$maxKey" : 1 } }
$ mongo testHashedSharding -u root -p mongo --authenticationDatabase "admin" --host "repl-a/localhost:27000,localhost:27001,localhost:27002" --eval 'db.products.count()'
298899
$ mongo testHashedSharding -u root -p mongo --authenticationDatabase "admin" --host "repl-b/localhost:28000,localhost:28001,localhost:28002" --eval 'db.products.count()'
217885
$ mongo testHashedSharding -u root -p mongo --authenticationDatabase "admin" --eval 'db.products.count()'
516784
Chunks
use testChunks
// Insert 100,000 sample account documents. Build a fresh document on every
// iteration: the shell's insertOne() adds an _id to the object it is given,
// so reusing a single object would hit duplicate-key errors after the first insert.
for (var i = 0; i < 100000; i++) {
    db.accounts.insertOne({
        "name": "John Doe",
        "balance": 99.99,
        "accountNo": i
    })
}
sh.enableSharding("testChunks")
db.accounts.createIndex({accountNo: 1})
sh.shardCollection("testChunks.accounts", {accountNo: 1})
db.accounts.getShardDistribution()
Shard repl-b at repl-b/localhost:28000,localhost:28001,localhost:28002
data : 7.34MiB docs : 100000 chunks : 1
estimated data per chunk : 7.34MiB
estimated docs per chunk : 100000
Totals
data : 7.34MiB docs : 100000 chunks : 1
Shard repl-b contains 100% data, 100% docs in cluster, avg obj size on shard : 77B
use config
db.databases.find({}, {_id: 1, primary: 1, partitioned: 1, lastMod: 1})
{ "_id" : "testHashedSharding", "primary" : "repl-a", "partitioned" : true }
{ "_id" : "testConfigDB", "primary" : "repl-b", "partitioned" : true }
{ "_id" : "testChunks", "primary" : "repl-b", "partitioned" : true }
db.chunks.aggregate({$match: {ns: {$eq: "testChunks.accounts"}}},{$project: {_id: 0, min: "$min.accountNo", max: "$max.accountNo", shard: 1}})
{ "shard" : "repl-b", "min" : { "$minKey" : 1 }, "max" : { "$maxKey" : 1 } }
sh.splitAt("testChunks.accounts", {accountNo: NumberLong(10000)})
sh.splitAt("testChunks.accounts", {accountNo: NumberLong(20000)})
sh.splitAt("testChunks.accounts", {accountNo: NumberLong(30000)})
sh.splitAt("testChunks.accounts", {accountNo: NumberLong(40000)})
sh.splitAt("testChunks.accounts", {accountNo: NumberLong(50000)})
sh.splitAt("testChunks.accounts", {accountNo: NumberLong(60000)})
sh.splitAt("testChunks.accounts", {accountNo: NumberLong(70000)})
sh.splitAt("testChunks.accounts", {accountNo: NumberLong(80000)})
sh.splitAt("testChunks.accounts", {accountNo: NumberLong(90000)})
db.accounts.getShardDistribution()
Shard repl-a at repl-a/localhost:27000,localhost:27001,localhost:27002
data : 3.67MiB docs : 50000 chunks : 5
estimated data per chunk : 751KiB
estimated docs per chunk : 10000
Shard repl-b at repl-b/localhost:28000,localhost:28001,localhost:28002
data : 7.34MiB docs : 100000 chunks : 5
estimated data per chunk : 1.46MiB
estimated docs per chunk : 20000
Totals
data : 11.01MiB docs : 150000 chunks : 10
Shard repl-a contains 33.33% data, 33.33% docs in cluster, avg obj size on shard : 77B
Shard repl-b contains 66.66% data, 66.66% docs in cluster, avg obj size on shard : 77B
sh.status()
...
{ "_id" : "testChunks", "primary" : "repl-b", "partitioned" : true, "version" : { "uuid" : UUID("2da4784a-fa07-4a74-b4a7-bcbfa6729a4a"), "lastMod" : 1 } }
testChunks.accounts
shard key: { "accountNo" : 1 }
unique: false
balancing: true
chunks:
repl-a 5
repl-b 5
{ "accountNo" : { "$minKey" : 1 } } -->> { "accountNo" : NumberLong(10000) } on : repl-a Timestamp(2, 0)
{ "accountNo" : NumberLong(10000) } -->> { "accountNo" : NumberLong(20000) } on : repl-a Timestamp(3, 0)
{ "accountNo" : NumberLong(20000) } -->> { "accountNo" : NumberLong(30000) } on : repl-a Timestamp(4, 0)
{ "accountNo" : NumberLong(30000) } -->> { "accountNo" : NumberLong(40000) } on : repl-a Timestamp(5, 0)
{ "accountNo" : NumberLong(40000) } -->> { "accountNo" : NumberLong(50000) } on : repl-a Timestamp(6, 0)
{ "accountNo" : NumberLong(50000) } -->> { "accountNo" : NumberLong(60000) } on : repl-b Timestamp(6, 1)
{ "accountNo" : NumberLong(60000) } -->> { "accountNo" : NumberLong(70000) } on : repl-b Timestamp(4, 2)
{ "accountNo" : NumberLong(70000) } -->> { "accountNo" : NumberLong(80000) } on : repl-b Timestamp(5, 2)
{ "accountNo" : NumberLong(80000) } -->> { "accountNo" : NumberLong(90000) } on : repl-b Timestamp(5, 4)
{ "accountNo" : NumberLong(90000) } -->> { "accountNo" : { "$maxKey" : 1 } } on : repl-b Timestamp(5, 5)
db.chunks.aggregate({$match: {ns: {$eq: "testChunks.accounts"}}},{$project: {_id: 0, min: "$min.accountNo", max: "$max.accountNo", shard: 1}})
{ "shard" : "repl-a", "min" : { "$minKey" : 1 }, "max" : NumberLong(10000) }
{ "shard" : "repl-a", "min" : NumberLong(10000), "max" : NumberLong(20000) }
{ "shard" : "repl-a", "min" : NumberLong(20000), "max" : NumberLong(30000) }
{ "shard" : "repl-a", "min" : NumberLong(30000), "max" : NumberLong(40000) }
{ "shard" : "repl-a", "min" : NumberLong(40000), "max" : NumberLong(50000) }
{ "shard" : "repl-b", "min" : NumberLong(50000), "max" : NumberLong(60000) }
{ "shard" : "repl-b", "min" : NumberLong(60000), "max" : NumberLong(70000) }
{ "shard" : "repl-b", "min" : NumberLong(70000), "max" : NumberLong(80000) }
{ "shard" : "repl-b", "min" : NumberLong(80000), "max" : NumberLong(90000) }
{ "shard" : "repl-b", "min" : NumberLong(90000), "max" : { "$maxKey" : 1 } }
$ mongo testChunks -u root -p mongo --authenticationDatabase "admin" --host "repl-a/localhost:27000,localhost:27001,localhost:27002" --eval 'db.accounts.count()'
50000
$ mongo testChunks -u root -p mongo --authenticationDatabase "admin" --host "repl-b/localhost:28000,localhost:28001,localhost:28002" --eval 'db.accounts.count()'
50000
$ mongo testChunks -u root -p mongo --authenticationDatabase "admin" --eval 'db.accounts.count()'
100000
Routed Queries and Scatter-Gather
If a query filters on the shard key, mongos can route the request directly to the shard(s) whose chunks can contain matching documents; otherwise the request has to be sent to every shard and the results merged (scatter-gather). This part compares the two plans against the testChunks database: the equality query below targets a single shard, while the range query spans chunks on both shards and therefore ends in a SHARD_MERGE stage.
db.accounts.find({accountNo: 27008}).explain().queryPlanner.winningPlan
{
"stage" : "SINGLE_SHARD",
"shards" : [
{
"shardName" : "repl-a",
"connectionString" : "repl-a/localhost:27000,localhost:27001,localhost:27002",
"serverInfo" : {
"host" : "Kylins-MacBook-Pro.local",
"port" : 27000,
"version" : "4.0.7",
"gitVersion" : "1b82c812a9c0bbf6dc79d5400de9ea99e6ffa025"
},
"plannerVersion" : 1,
"namespace" : "testChunks.accounts",
"indexFilterSet" : false,
"parsedQuery" : {
"accountNo" : {
"$eq" : 27008
}
},
"winningPlan" : {
"stage" : "FETCH",
"inputStage" : {
"stage" : "SHARDING_FILTER",
"inputStage" : {
"stage" : "IXSCAN",
"keyPattern" : {
"accountNo" : 1
},
"indexName" : "accountNo_1",
"isMultiKey" : false,
"multiKeyPaths" : {
"accountNo" : [ ]
},
"isUnique" : false,
"isSparse" : false,
"isPartial" : false,
"indexVersion" : 2,
"direction" : "forward",
"indexBounds" : {
"accountNo" : [
"[27008.0, 27008.0]"
]
}
}
}
},
"rejectedPlans" : [ ]
}
]
}
db.accounts.find({accountNo: {$gt: 27008, $lt: 67000}}).explain().queryPlanner.winningPlan
{
"stage" : "SHARD_MERGE",
"shards" : [
{
"shardName" : "repl-a",
"connectionString" : "repl-a/localhost:27000,localhost:27001,localhost:27002",
"serverInfo" : {
"host" : "Kylins-MacBook-Pro.local",
"port" : 27000,
"version" : "4.0.7",
"gitVersion" : "1b82c812a9c0bbf6dc79d5400de9ea99e6ffa025"
},
"plannerVersion" : 1,
"namespace" : "testChunks.accounts",
"indexFilterSet" : false,
"parsedQuery" : {
"$and" : [
{
"accountNo" : {
"$lt" : 67000
}
},
{
"accountNo" : {
"$gt" : 27008
}
}
]
},
"winningPlan" : {
"stage" : "FETCH",
"inputStage" : {
"stage" : "SHARDING_FILTER",
"inputStage" : {
"stage" : "IXSCAN",
"keyPattern" : {
"accountNo" : 1
},
"indexName" : "accountNo_1",
"isMultiKey" : false,
"multiKeyPaths" : {
"accountNo" : [ ]
},
"isUnique" : false,
"isSparse" : false,
"isPartial" : false,
"indexVersion" : 2,
"direction" : "forward",
"indexBounds" : {
"accountNo" : [
"(27008.0, 67000.0)"
]
}
}
}
},
"rejectedPlans" : [ ]
},
{
"shardName" : "repl-b",
"connectionString" : "repl-b/localhost:28000,localhost:28001,localhost:28002",
"serverInfo" : {
"host" : "Kylins-MacBook-Pro.local",
"port" : 28000,
"version" : "4.0.7",
"gitVersion" : "1b82c812a9c0bbf6dc79d5400de9ea99e6ffa025"
},
"plannerVersion" : 1,
"namespace" : "testChunks.accounts",
"indexFilterSet" : false,
"parsedQuery" : {
"$and" : [
{
"accountNo" : {
"$lt" : 67000
}
},
{
"accountNo" : {
"$gt" : 27008
}
}
]
},
"winningPlan" : {
"stage" : "FETCH",
"inputStage" : {
"stage" : "SHARDING_FILTER",
"inputStage" : {
"stage" : "IXSCAN",
"keyPattern" : {
"accountNo" : 1
},
"indexName" : "accountNo_1",
"isMultiKey" : false,
"multiKeyPaths" : {
"accountNo" : [ ]
},
"isUnique" : false,
"isSparse" : false,
"isPartial" : false,
"indexVersion" : 2,
"direction" : "forward",
"indexBounds" : {
"accountNo" : [
"(27008.0, 67000.0)"
]
}
}
}
},
"rejectedPlans" : [ ]
}
]
}