[ MongoDB ] Building and Testing a Replica Set
Tags: Replica Sets, replication (replica set)
node1: 10.0.0.10
node2: 10.0.0.11
node3: 10.0.0.12
Replica set topology: (diagram omitted)
MongoDB binaries, configuration file, and init script download: http://pan.baidu.com/s/1hslX7Ju (password: jlei)
node1 deployment:
# Copy the tarball to the other two nodes.
[ ~]# scp mongodb-linux-x86_64-rhel62-3.2.8.tgz 10.0.0.11:/root/
[ ~]# scp mongodb-linux-x86_64-rhel62-3.2.8.tgz 10.0.0.12:/root/
[ ~]# tar xf mongodb-linux-x86_64-rhel62-3.2.8.tgz -C /usr/local/
[ ~]# ln -vs /usr/local/mongodb-linux-x86_64-rhel62-3.2.8 /usr/local/mongodb
`/usr/local/mongodb' -> `/usr/local/mongodb-linux-x86_64-rhel62-3.2.8'
# Add the MongoDB binaries to PATH
[ ~]# echo 'export PATH=/usr/local/mongodb/bin:$PATH' > /etc/profile.d/mongod.conf
[ ~]# source /etc/profile.d/mongod.conf
# Create the user that runs the mongodb service
[ ~]# useradd -s /sbin/nologin mongod
# Create directories for the mongodb config, logs, and data
[ ~]# mkdir -pv /mongodb/{conf,log,data}
# Write the mongodb configuration file
[ ~]# vim /mongodb/conf/mongod.conf
systemLog:
  destination: file
  # Log file location
  path: /mongodb/log/mongod.log
  logAppend: true
storage:
  # Journal settings
  journal:
    enabled: true
  # Data file location
  dbPath: /mongodb/data/
  # One directory per database
  directoryPerDB: true
  # Storage engine
  engine: wiredTiger
  # WiredTiger engine settings
  wiredTiger:
    engineConfig:
      # Maximum cache used by WiredTiger (tune to the server)
      cacheSizeGB: 10
      # Store indexes in their own per-database directory as well
      directoryForIndexes: true
    # Collection compression settings
    collectionConfig:
      blockCompressor: zlib
    # Index settings
    indexConfig:
      prefixCompression: true
processManagement:
  fork: true  # fork and run in background
  pidFilePath: /var/run/mongodb/mongod.pid
net:
  # Port settings
  port: 27017
  bindIp: 10.0.0.10
# Key replica set parameters
replication:
  oplogSizeMB: 20
  replSetName: rs0
# Create the log file
[ ~]# touch /mongodb/log/mongod.log
[ ~]# chown -R mongod:mongod /mongodb/
# Write the service init script (it is fairly long, so it is not shown here)
[ ~]# vim /etc/init.d/mongod
[ ~]# service mongod start
Starting mongod:
[ ~]# netstat -ntplu | grep mongod
tcp        0      0 10.0.0.10:27017        0.0.0.0:*        LISTEN      1592/mongod
node2 and node3 are configured the same way as node1 (adjust bindIp to each node's own address).
[ data]# service mongod start
Starting mongod:
[ data]# netstat -ntplu | grep mongod
tcp        0      0 10.0.0.11:27017        0.0.0.0:*        LISTEN      1621/mongod
[ ~]# service mongod start
Starting mongod:
[ ~]# netstat -ntplu | grep mongod
tcp        0      0 10.0.0.12:27017        0.0.0.0:*        LISTEN      1996/mongod
Configure the replica set from node1:
[ ~]# mongo 10.0.0.10:27017
> rs.status()
"info" : "run rs.initiate(...) if not yet done for the set",
"errmsg" : "no replset config has been received",
"code" : 94
# Initialize the replica set
> rs.initiate()
"info2" : "no configuration specified. Using a default configuration for the set",
"me" : "10.0.0.10:27017",
# Add node2 and node3 as members
rs0:PRIMARY> rs.add('10.0.0.11:27017')
{ "ok" : 1 }
rs0:PRIMARY> rs.add('10.0.0.12:27017')
{ "ok" : 1 }
rs0:PRIMARY> rs.status()
"set" : "rs0",
"date" : ISODate("T09:37:42.412Z"),
"myState" : 1,
"term" : NumberLong(1),
"heartbeatIntervalMillis" : NumberLong(2000),
"members" : [
"_id" : 0,
"name" : "10.0.0.10:27017",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 843,
"optime" : {
"ts" : Timestamp(, 1),
"t" : NumberLong(1)
"optimeDate" : ISODate("T09:37:05Z"),
"infoMessage" : "could not find member to sync from",
"electionTime" : Timestamp(, 2),
"electionDate" : ISODate("T09:36:07Z"),
"configVersion" : 3,
"self" : true
"_id" : 1,
"name" : "10.0.0.11:27017",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 41,
"optime" : {
"ts" : Timestamp(, 1),
"t" : NumberLong(1)
"optimeDate" : ISODate("T09:37:05Z"),
"lastHeartbeat" : ISODate("T09:37:41.063Z"),
"lastHeartbeatRecv" : ISODate("T09:37:42.065Z"),
"pingMs" : NumberLong(0),
"syncingTo" : "10.0.0.10:27017",
"configVersion" : 3
"_id" : 2,
"name" : "10.0.0.12:27017",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 37,
"optime" : {
"ts" : Timestamp(, 1),
"t" : NumberLong(1)
"optimeDate" : ISODate("T09:37:05Z"),
"lastHeartbeat" : ISODate("T09:37:41.063Z"),
"lastHeartbeatRecv" : ISODate("T09:37:42.065Z"),
"pingMs" : NumberLong(0),
"syncingTo" : "10.0.0.10:27017",
"configVersion" : 3
rs0:PRIMARY> rs.isMaster()
"hosts" : [
"10.0.0.10:27017",
"10.0.0.11:27017",
"10.0.0.12:27017"
"setName" : "rs0",
"setVersion" : 3,
"ismaster" : true,
"secondary" : false,
"primary" : "10.0.0.10:27017",
# The primary node is 10.0.0.10:27017
"me" : "10.0.0.10:27017",
# The current node is 10.0.0.10:27017
"electionId" : ObjectId("7fffffff0001"),
"maxBsonObjectSize" : ,
"maxMessageSizeBytes" : ,
"maxWriteBatchSize" : 1000,
"localTime" : ISODate("T09:38:07.799Z"),
"maxWireVersion" : 4,
"minWireVersion" : 0,
Test: insert 10,000 documents through the primary and check whether the two SECONDARY nodes replicate them.
rs0:PRIMARY> for(var i=1;i<=10000;i++) db.users.insert({id:i,addr_1:"Beijing",addr_2:"Shanghai"});
WriteResult({ "nInserted" : 1 })
rs0:PRIMARY> show dbs
rs0:PRIMARY> use test
switched to db test
rs0:PRIMARY> show collections
rs0:PRIMARY> db.users.find()
{ "_id" : ObjectId("57aeecd04f49c2b3d60e8ff5"), "id" : 1, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("57aeecd04f49c2b3d60e8ff6"), "id" : 2, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("57aeecd04f49c2b3d60e8ff7"), "id" : 3, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("57aeecd04f49c2b3d60e8ff8"), "id" : 4, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("57aeecd04f49c2b3d60e8ff9"), "id" : 5, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("57aeecd04f49c2b3d60e8ffa"), "id" : 6, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("57aeecd04f49c2b3d60e8ffb"), "id" : 7, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("57aeecd04f49c2b3d60e8ffc"), "id" : 8, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("57aeecd04f49c2b3d60e8ffd"), "id" : 9, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("57aeecd04f49c2b3d60e8ffe"), "id" : 10, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("57aeecd04f49c2b3d60e8fff"), "id" : 11, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("57aeecd04f49c2b3d60e9000"), "id" : 12, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("57aeecd04f49c2b3d60e9001"), "id" : 13, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("57aeecd04f49c2b3d60e9002"), "id" : 14, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("57aeecd04f49c2b3d60e9003"), "id" : 15, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("57aeecd04f49c2b3d60e9004"), "id" : 16, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("57aeecd04f49c2b3d60e9005"), "id" : 17, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("57aeecd04f49c2b3d60e9006"), "id" : 18, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("57aeecd04f49c2b3d60e9007"), "id" : 19, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("57aeecd04f49c2b3d60e9008"), "id" : 20, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
# Check a SECONDARY node
rs0:SECONDARY> show dbs
2015-12-07T03:32:41.419+0800 E QUERY
[thread1] Error: listDatabases failed:{ "ok" : 0, "errmsg" : "not master and slaveOk=false", "code" : 13435 } :
/mongo/shell/utils.js:25:13
/mongo/shell/mongo.js:62:1
/mongo/shell/utils.js:761:19
s/mongo/shell/utils.js:651:15
@(shellhelp2):1:1
# By default, reading collections on a SECONDARY is not allowed.
rs0:SECONDARY> rs.slaveOk()
# Run rs.slaveOk() before reading collections on a secondary.
rs0:SECONDARY> show dbs
rs0:SECONDARY> use test
switched to db test
rs0:SECONDARY> db.users.find()
{ "_id" : ObjectId("57aeecd04f49c2b3d60e8ff5"), "id" : 1, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("57aeecd04f49c2b3d60e8ffd"), "id" : 9, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("57aeecd04f49c2b3d60e8ff6"), "id" : 2, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("57aeecd04f49c2b3d60e8ff7"), "id" : 3, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("57aeecd04f49c2b3d60e8ffe"), "id" : 10, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("57aeecd04f49c2b3d60e8fff"), "id" : 11, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("57aeecd04f49c2b3d60e8ffc"), "id" : 8, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("57aeecd04f49c2b3d60e8ff8"), "id" : 4, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("57aeecd04f49c2b3d60e8ff9"), "id" : 5, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("57aeecd04f49c2b3d60e8ffa"), "id" : 6, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("57aeecd04f49c2b3d60e8ffb"), "id" : 7, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("57aeecd04f49c2b3d60e9000"), "id" : 12, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("57aeecd04f49c2b3d60e9001"), "id" : 13, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("57aeecd04f49c2b3d60e9008"), "id" : 20, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("57aeecd04f49c2b3d60e9006"), "id" : 18, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("57aeecd04f49c2b3d60e9005"), "id" : 17, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("57aeecd04f49c2b3d60e9007"), "id" : 19, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("57aeecd04f49c2b3d60e9009"), "id" : 21, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("57aeecd04f49c2b3d60e9002"), "id" : 14, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("57aeecd04f49c2b3d60e9004"), "id" : 16, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
The output above shows that replication succeeded.
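A quicker cross-check than eyeballing find() output is to compare document counts on both nodes; a minimal sketch using the same users collection as above:
rs0:PRIMARY> db.users.count()      // reports 10000 for the loop above
rs0:SECONDARY> db.users.count()    // run rs.slaveOk() first; the count should match the primary's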
Now shut down the PRIMARY and see what happens:
[ ~]# service mongod stop
Stopping mongod:
rs0:PRIMARY> rs.status()
"set" : "rs0",
"date" : ISODate("T19:36:07.801Z"),
"myState" : 1,
"term" : NumberLong(2),
"heartbeatIntervalMillis" : NumberLong(2000),
"members" : [
"_id" : 0,
"name" : "10.0.0.10:27017",
"health" : 0,
"state" : 8,
"stateStr" : "(not reachable/healthy)",
# node1 is detected as unhealthy.
"uptime" : 0,
"optime" : {
"ts" : Timestamp(0, 0),
"t" : NumberLong(-1)
"optimeDate" : ISODate("T00:00:00Z"),
"lastHeartbeat" : ISODate("T19:36:06.814Z"),
"lastHeartbeatRecv" : ISODate("T19:35:37.559Z"),
"pingMs" : NumberLong(0),
"lastHeartbeatMessage" : "Connection refused",
"configVersion" : -1
"_id" : 1,
"name" : "10.0.0.11:27017",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
# The PRIMARY role has been handed over to node2.
"uptime" : 1430,
"optime" : {
"ts" : Timestamp(, 2222),
"t" : NumberLong(2)
"optimeDate" : ISODate("T09:48:04Z"),
"infoMessage" : "could not find member to sync from",
"electionTime" : Timestamp(, 2221),
"electionDate" : ISODate("T09:48:04Z"),
"configVersion" : 3,
"self" : true
"_id" : 2,
"name" : "10.0.0.12:27017",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 965,
"optime" : {
"ts" : Timestamp(, 2222),
"t" : NumberLong(2)
"optimeDate" : ISODate("T09:48:04Z"),
"lastHeartbeat" : ISODate("T19:36:06.813Z"),
"lastHeartbeatRecv" : ISODate("T19:36:07.513Z"),
"pingMs" : NumberLong(0),
"syncingTo" : "10.0.0.11:27017",
"configVersion" : 3
These tests show that the MongoDB replica set provides high availability. Original post: http://www.cnblogs.com/hukey/p/5769548.html
With installation covered, we can move on to this article's walkthrough and tests. A MongoDB replica set (Replica Set) is a primary/secondary cluster with automatic failover, made up of one Primary node and one or more Secondary nodes, similar in spirit to MySQL's MMM architecture. For more background on replica sets, see the earlier introductory material or search Google/Baidu.
Data synchronization inside a replica set works as follows: the Primary writes the data; a Secondary reads the Primary's oplog to learn what to replicate, copies the data, and records the replication information in its own oplog. If an operation fails, the secondary stops replicating from its current sync source. If a secondary goes down for some reason, then after restarting it automatically resumes syncing from the last operation in its oplog and, once caught up, writes the information into its own oplog. Because replication copies the data first and writes the oplog afterwards, the same operation may occasionally be synced twice; MongoDB was designed with this in mind, and applying the same oplog operation several times has exactly the same effect as applying it once. In short:
After the Primary completes a data operation, each Secondary performs a series of steps to keep its data in sync:
1: Check the oplog.rs collection in its own local database and find the most recent timestamp.
2: Query the Primary's local.oplog.rs collection for records newer than that timestamp.
3: Insert the records it finds into its own oplog.rs collection and apply those operations.
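Step 1 can be reproduced by hand in the mongo shell; a minimal sketch run on a secondary, using the standard local.oplog.rs collection and its ts/op/ns fields:
> use local
switched to db local
> db.oplog.rs.find({}, {ts: 1, op: 1, ns: 1}).sort({$natural: -1}).limit(1)   // newest oplog entry on this node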
Replica set synchronization, like classic master/slave replication, is asynchronous; the difference is that a replica set adds automatic failover. The mechanism: the slave side fetches the log from the primary and replays every recorded operation on itself in strict order (queries are not logged). This log is the oplog.rs collection in the local database. On 64-bit machines it is fairly large by default, about 5% of the disk; its size can be set with the startup parameter --oplogSize 1000 (in MB).
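The configured oplog size and the time window it currently covers can be checked with a standard shell helper (output omitted here):
> db.printReplicationInfo()   // prints the configured oplog size and the first/last oplog event times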
Note: in a replica set, if all the Secondaries go down and only the Primary is left, the Primary will eventually step down to Secondary and stop serving writes.
Part 1: Environment setup
1: Prepare the servers
192.168.200.25
192.168.200.245
192.168.200.252
2: Install MongoDB (see the earlier post): http://www.cnblogs.com/zhoujinyi/archive//3113868.html
3: Modify the configuration; only the replSet parameter needs to be enabled. The format is:
192.168.200.252: --replSet = mmm/192.168.200.245:27017
# mmm is the name of the replica set; 192.168.200.25:27017 is the address of an instance.
192.168.200.245: --replSet = mmm/192.168.200.252:27017
192.168.200.25: --replSet = mmm/192.168.200.252:27017,192.168.200.245:27017
After the instances are started, the log prompts:
replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done
This means the set needs to be initialized; initialization can only be performed once.
5: Initialize the replica set
Log into MongoDB on any one of the machines and run the initialization. Because this is a brand-new replica set, any node will do; if one node already holds data, you must run it on that node; if more than one node holds data, the set cannot be initialized.
zhoujy@zhoujy:~$ mongo --host=192.168.200.252
MongoDB shell version: 2.4.6
connecting to: 192.168.200.252:27017/test
> rs.initiate({"_id":"mmm","members":[
... {"_id":1,
... "host":"192.168.200.252:27017",
... "priority":1
... },
... {"_id":2,
... "host":"192.168.200.245:27017",
... "priority":1
... }
... ]})
{
        "info" : "Config now saved locally.  Should come online in about a minute.",
        "ok" : 1
}
Field meanings:
"_id": the name of the replica set.
"members": the list of servers in the replica set.
"_id" (per member): the server's unique ID.
"host": the server's host and port.
"priority": the election priority, 1 by default. Priority 0 marks a passive member that can never become the active (primary) node; among members with non-zero priority, the highest priority wins the election.
"arbiterOnly": an arbiter member that only votes; it receives no data and cannot become the active node.
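As an aside, a configuration document combining these fields might look like the following. This is only an illustration of the syntax — the set in this walkthrough is initialized with just the two data-bearing members shown above:
> rs.initiate({
...   "_id": "mmm",
...   "members": [
...     {"_id": 1, "host": "192.168.200.252:27017", "priority": 2},       // preferred primary
...     {"_id": 2, "host": "192.168.200.245:27017", "priority": 0},       // passive: never elected
...     {"_id": 3, "host": "192.168.200.25:27017", "arbiterOnly": true}   // votes only, stores no data
...   ]
... })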
> rs.status()
"set" : "mmm",
"date" : ISODate("T04:03:53Z"),
"myState" : 1,
"members" : [
"_id" : 1,
"name" : "192.168.200.252:27017",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 76,
"optime" : Timestamp(, 1),
"optimeDate" : ISODate("T04:03:11Z"),
"self" : true
"_id" : 2,
"name" : "192.168.200.245:27017",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 35,
"optime" : Timestamp(, 1),
"optimeDate" : ISODate("T04:03:11Z"),
"lastHeartbeat" : ISODate("T04:03:52Z"),
"lastHeartbeatRecv" : ISODate("T04:03:53Z"),
"pingMs" : 0,
"syncingTo" : "192.168.200.252:27017"
Check the log on 252:
Tue Feb 18 12:03:29.334 [rsMgr] replSet PRIMARY
Tue Feb 18 12:03:40.341 [rsHealthPoll] replSet member 192.168.200.245:27017 is now in state SECONDARY
At this point the whole replica set has been built successfully.
The replica set above only has two servers — how do we add the remaining one? And besides adding members at initialization time, how can nodes be added or removed later?
Part 2: Maintenance operations
1: Adding and removing nodes
Add the 25 server to the replica set:
rs.add("192.168.200.25:27017")
mmm:PRIMARY> rs.add("192.168.200.25:27017")
{ "ok" : 1 }
mmm:PRIMARY> rs.status()
"set" : "mmm",
"date" : ISODate("T04:53:00Z"),
"myState" : 1,
"members" : [
"_id" : 1,
"name" : "192.168.200.252:27017",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 3023,
"optime" : Timestamp(, 1),
"optimeDate" : ISODate("T04:52:57Z"),
"self" : true
"_id" : 2,
"name" : "192.168.200.245:27017",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 2982,
"optime" : Timestamp(, 1),
"optimeDate" : ISODate("T04:52:57Z"),
"lastHeartbeat" : ISODate("T04:52:59Z"),
"lastHeartbeatRecv" : ISODate("T04:53:00Z"),
"pingMs" : 0,
"syncingTo" : "192.168.200.252:27017"
"_id" : 3,
"name" : "192.168.200.25:27017",
"health" : 1,
"state" : 6,
"stateStr" : "UNKNOWN",
# After a moment this becomes SECONDARY
"uptime" : 3,
"optime" : Timestamp(0, 0),
"optimeDate" : ISODate("T00:00:00Z"),
"lastHeartbeat" : ISODate("T04:52:59Z"),
"lastHeartbeatRecv" : ISODate("T00:00:00Z"),
"pingMs" : 0,
"lastHeartbeatMessage" : "still initializing"
Remove the 25 server from the replica set:
rs.remove("192.168.200.25:27017")
mmm:PRIMARY> rs.remove("192.168.200.25:27017")
Tue Feb 18 13:01:09.298 DBClientCursor::init call() failed
Tue Feb 18 13:01:09.299 Error: error doing query: failed at src/mongo/shell/query.js:78
Tue Feb 18 13:01:09.300 trying reconnect to 192.168.200.252:27017
Tue Feb 18 13:01:09.301 reconnect 192.168.200.252:27017 ok
mmm:PRIMARY> rs.status()
"set" : "mmm",
"date" : ISODate("T05:01:19Z"),
"myState" : 1,
"members" : [
"_id" : 1,
"name" : "192.168.200.252:27017",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 3522,
"optime" : Timestamp(, 1),
"optimeDate" : ISODate("T05:01:09Z"),
"self" : true
"_id" : 2,
"name" : "192.168.200.245:27017",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 10,
"optime" : Timestamp(, 1),
"optimeDate" : ISODate("T05:01:09Z"),
"lastHeartbeat" : ISODate("T05:01:19Z"),
"lastHeartbeatRecv" : ISODate("T05:01:18Z"),
"pingMs" : 0,
"lastHeartbeatMessage" : "syncing to: 192.168.200.252:27017",
"syncingTo" : "192.168.200.252:27017"
The 192.168.200.25 node has been removed.
2: Checking replication progress
db.printSlaveReplicationInfo()
mmm:PRIMARY> db.printSlaveReplicationInfo()
192.168.200.245:27017
syncedTo: Tue Feb 18 2014 13:02:35 GMT+0800 (CST)
= 145 secs ago (0.04hrs)
192.168.200.25:27017
syncedTo: Tue Feb 18 2014 13:02:35 GMT+0800 (CST)
= 145 secs ago (0.04hrs)
source: the secondary's IP and port.
syncedTo: the current sync position and the time of the last sync.
As the output shows, nothing is synced while the database content is unchanged; as soon as the database changes, it is synced immediately.
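Replication lag can also be derived from the optimeDate fields in rs.status(); a sketch run on the primary (the field names are the ones visible in the rs.status() output below):
mmm:PRIMARY> var s = rs.status();
mmm:PRIMARY> var me = s.members.filter(function(m){ return m.self; })[0];
mmm:PRIMARY> s.members.forEach(function(m){ print(m.name + "  " + m.stateStr + "  lag: " + (me.optimeDate - m.optimeDate)/1000 + "s"); })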
3: Checking the replica set status
rs.status()
mmm:PRIMARY> rs.status()
"set" : "mmm",
"date" : ISODate("T05:12:28Z"),
"myState" : 1,
"members" : [
"_id" : 1,
"name" : "192.168.200.252:27017",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 4191,
"optime" : Timestamp(, 1),
"optimeDate" : ISODate("T05:02:35Z"),
"self" : true
"_id" : 2,
"name" : "192.168.200.245:27017",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 679,
"optime" : Timestamp(, 1),
"optimeDate" : ISODate("T05:02:35Z"),
"lastHeartbeat" : ISODate("T05:12:27Z"),
"lastHeartbeatRecv" : ISODate("T05:12:27Z"),
"pingMs" : 0,
"syncingTo" : "192.168.200.252:27017"
"_id" : 3,
"name" : "192.168.200.25:27017",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 593,
"optime" : Timestamp(, 1),
"optimeDate" : ISODate("T05:02:35Z"),
"lastHeartbeat" : ISODate("T05:12:28Z"),
"lastHeartbeatRecv" : ISODate("T05:12:28Z"),
"pingMs" : 0,
"syncingTo" : "192.168.200.252:27017"
4: The replica set configuration
rs.conf() / rs.config()
mmm:PRIMARY> rs.conf()
"_id" : "mmm",
"version" : 4,
"members" : [
"_id" : 1,
"host" : "192.168.200.252:27017"
"_id" : 2,
"host" : "192.168.200.245:27017"
"_id" : 3,
"host" : "192.168.200.25:27017"
5: Working with a Secondary
By default a Secondary serves no requests, i.e. it can be neither read from nor written to, and reads report: error: { "$err" : "not master and slaveOk=false", "code" : 13435 }
If reads are needed in special cases, run rs.slaveOk(); it only applies to the current connection.
mmm:SECONDARY> db.test.find()
error: { "$err" : "not master and slaveOk=false", "code" : 13435 }
mmm:SECONDARY> rs.slaveOk()
mmm:SECONDARY> db.test.find()
{ "_id" : ObjectId("5302edfa8c8e"), "a" : 1 }
6: (more to come)
1: Test the replica set's data replication
Insert data on the Primary (192.168.200.252:27017):
mmm:PRIMARY> for(var i=0;i<10000;i++){db.test.insert({"name":"test"+i,"age":123})}
mmm:PRIMARY> db.test.count()
Check on a Secondary whether the data has been replicated:
mmm:SECONDARY> rs.slaveOk()
mmm:SECONDARY> db.test.count()
The data has been replicated.
2: Test the replica set's failover
Shut down the Primary node and check the state of the other two nodes:
mmm:PRIMARY> rs.status()
"set" : "mmm",
"date" : ISODate("T05:38:54Z"),
"myState" : 1,
"members" : [
"_id" : 1,
"name" : "192.168.200.252:27017",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 5777,
"optime" : Timestamp(, 2678),
"optimeDate" : ISODate("T05:32:56Z"),
"self" : true
"_id" : 2,
"name" : "192.168.200.245:27017",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 2265,
"optime" : Timestamp(, 2678),
"optimeDate" : ISODate("T05:32:56Z"),
"lastHeartbeat" : ISODate("T05:38:54Z"),
"lastHeartbeatRecv" : ISODate("T05:38:53Z"),
"pingMs" : 0,
"syncingTo" : "192.168.200.252:27017"
"_id" : 3,
"name" : "192.168.200.25:27017",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 2179,
"optime" : Timestamp(, 2678),
"optimeDate" : ISODate("T05:32:56Z"),
"lastHeartbeat" : ISODate("T05:38:54Z"),
"lastHeartbeatRecv" : ISODate("T05:38:53Z"),
"pingMs" : 0,
"syncingTo" : "192.168.200.252:27017"
mmm:PRIMARY> use admin
switched to db admin
mmm:PRIMARY> db.shutdownServer()
# Connect to any of the remaining nodes:
mmm:SECONDARY> rs.status()
"set" : "mmm",
"date" : ISODate("T05:47:41Z"),
"myState" : 2,
"syncingTo" : "192.168.200.25:27017",
"members" : [
"_id" : 1,
"name" : "192.168.200.252:27017",
"health" : 0,
"state" : 8,
"stateStr" : "(not reachable/healthy)",
"uptime" : 0,
"optime" : Timestamp(, 2678),
"optimeDate" : ISODate("T05:32:56Z"),
"lastHeartbeat" : ISODate("T05:47:40Z"),
"lastHeartbeatRecv" : ISODate("T05:45:57Z"),
"pingMs" : 0
"_id" : 2,
"name" : "192.168.200.245:27017",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 5888,
"optime" : Timestamp(, 2678),
"optimeDate" : ISODate("T05:32:56Z"),
"errmsg" : "syncing to: 192.168.200.25:27017",
"self" : true
"_id" : 3,
"name" : "192.168.200.25:27017",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 2292,
"optime" : Timestamp(, 2678),
"optimeDate" : ISODate("T05:32:56Z"),
"lastHeartbeat" : ISODate("T05:47:40Z"),
"lastHeartbeatRecv" : ISODate("T05:47:39Z"),
"pingMs" : 0,
"syncingTo" : "192.168.200.252:27017"
You can see that 192.168.200.25:27017 has changed from SECONDARY to PRIMARY; the details can be found in the log file. Continuing:
Insert data on the new primary:
mmm:PRIMARY> for(var i=0;i<10000;i++){db.test.insert({"name":"test"+i,"age":123})}
mmm:PRIMARY> db.test.count()
Restart the previously stopped 192.168.200.252:27017 and check the status:
mmm:SECONDARY> rs.status()
"set" : "mmm",
"date" : ISODate("T05:45:14Z"),
"myState" : 2,
"syncingTo" : "192.168.200.245:27017",
"members" : [
"_id" : 1,
"name" : "192.168.200.252:27017",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 12,
"optime" : Timestamp(, 8187),
"optimeDate" : ISODate("T05:42:48Z"),
"errmsg" : "syncing to: 192.168.200.245:27017",
"self" : true
"_id" : 2,
"name" : "192.168.200.245:27017",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 11,
"optime" : Timestamp(, 8187),
"optimeDate" : ISODate("T05:42:48Z"),
"lastHeartbeat" : ISODate("T05:45:13Z"),
"lastHeartbeatRecv" : ISODate("T05:45:12Z"),
"pingMs" : 0,
"syncingTo" : "192.168.200.25:27017"
"_id" : 3,
"name" : "192.168.200.25:27017",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 9,
"optime" : Timestamp(, 8187),
"optimeDate" : ISODate("T05:42:48Z"),
"lastHeartbeat" : ISODate("T05:45:13Z"),
"lastHeartbeatRecv" : ISODate("T05:45:13Z"),
"pingMs" : 0
After starting the former primary, it comes back as a SECONDARY. Check whether the data inserted on the new primary has been replicated:
mmm:SECONDARY> db.test.count()
Tue Feb 18 13:47:03.634 count failed: { "note" : "from execCommand", "ok" : 0, "errmsg" : "not master" } at src/mongo/shell/query.js:180
mmm:SECONDARY> rs.slaveOk()
mmm:SECONDARY> db.test.count()
The data has been replicated.
If all the Secondaries go down, or the replica set is left with only one node, that node can only run as a Secondary, which means the whole cluster can serve reads but not writes; when the other nodes recover, the former primary becomes primary again.
When a node that went down is restarted, there is a period of time (its length depends on the data volume and how long the node was down) during which every node in the cluster is a secondary and no writes are possible (and if the application has not set an appropriate ReadPreference, reads may not work either).
The officially recommended minimum replica set is one primary node plus two secondary nodes; a replica set of only two nodes has no real failover capability.
1: Manually switching the Primary to a node of your choice. The priority setting was mentioned above; since every member defaults to 1, you only need to give the chosen server the highest priority. To make 245 the primary node, proceed as follows:
mmm:PRIMARY> rs.conf()  # view the configuration
"_id" : "mmm",
"version" : 6,
# Each time the cluster configuration changes, the replica set's version increases by 1.
"members" : [
"_id" : 1,
"host" : "192.168.200.252:27017"
"_id" : 2,
"host" : "192.168.200.245:27017"
"_id" : 3,
"host" : "192.168.200.25:27017"
mmm:PRIMARY> rs.status()  # view the status
"set" : "mmm",
"date" : ISODate("T07:25:51Z"),
"myState" : 1,
"members" : [
"_id" : 1,
"name" : "192.168.200.252:27017",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 47,
"optime" : Timestamp(, 1),
"optimeDate" : ISODate("T07:25:04Z"),
"lastHeartbeat" : ISODate("T07:25:50Z"),
"lastHeartbeatRecv" : ISODate("T07:25:50Z"),
"pingMs" : 0,
"lastHeartbeatMessage" : "syncing to: 192.168.200.25:27017",
"syncingTo" : "192.168.200.25:27017"
"_id" : 2,
"name" : "192.168.200.245:27017",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 47,
"optime" : Timestamp(, 1),
"optimeDate" : ISODate("T07:25:04Z"),
"lastHeartbeat" : ISODate("T07:25:50Z"),
"lastHeartbeatRecv" : ISODate("T07:25:51Z"),
"pingMs" : 0,
"lastHeartbeatMessage" : "syncing to: 192.168.200.25:27017",
"syncingTo" : "192.168.200.25:27017"
"_id" : 3,
"name" : "192.168.200.25:27017",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 13019,
"optime" : Timestamp(, 1),
"optimeDate" : ISODate("T07:25:04Z"),
"self" : true
}
mmm:PRIMARY> cfg=rs.conf()
"_id" : "mmm",
"version" : 4,
"members" : [
"_id" : 1,
"host" : "192.168.200.252:27017"
"_id" : 2,
"host" : "192.168.200.245:27017"
"_id" : 3,
"host" : "192.168.200.25:27017"
mmm:PRIMARY> cfg.members[1].priority=2
# Modify the priority
mmm:PRIMARY> rs.reconfig(cfg)  # Reload the configuration; this forces the replica set to hold an election, and the highest-priority member becomes Primary. During the election every node in the cluster is a secondary.
mmm:SECONDARY> rs.status()
"set" : "mmm",
"date" : ISODate("T07:27:38Z"),
"myState" : 2,
"syncingTo" : "192.168.200.245:27017",
"members" : [
"_id" : 1,
"name" : "192.168.200.252:27017",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 71,
"optime" : Timestamp(, 1),
"optimeDate" : ISODate("T07:26:27Z"),
"lastHeartbeat" : ISODate("T07:27:37Z"),
"lastHeartbeatRecv" : ISODate("T07:27:38Z"),
"pingMs" : 0,
"lastHeartbeatMessage" : "syncing to: 192.168.200.245:27017",
"syncingTo" : "192.168.200.245:27017"
"_id" : 2,
"name" : "192.168.200.245:27017",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 71,
"optime" : Timestamp(, 1),
"optimeDate" : ISODate("T07:26:27Z"),
"lastHeartbeat" : ISODate("T07:27:37Z"),
"lastHeartbeatRecv" : ISODate("T07:27:38Z"),
"pingMs" : 0,
"syncingTo" : "192.168.200.25:27017"
"_id" : 3,
"name" : "192.168.200.25:27017",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 13126,
"optime" : Timestamp(, 1),
"optimeDate" : ISODate("T07:26:27Z"),
"errmsg" : "syncing to: 192.168.200.245:27017",
"self" : true
With that, the designated 245 server has become the primary node.
2: Adding an arbiter node
Remove the 25 node and restart it, then add it back as an arbiter:
rs.addArb("192.168.200.25:27017")
mmm:PRIMARY> rs.status()
"set" : "mmm",
"date" : ISODate("T08:14:36Z"),
"myState" : 1,
"members" : [
"_id" : 1,
"name" : "192.168.200.252:27017",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 795,
"optime" : Timestamp(, 100),
"optimeDate" : ISODate("T08:11:08Z"),
"lastHeartbeat" : ISODate("T08:14:35Z"),
"lastHeartbeatRecv" : ISODate("T08:14:35Z"),
"pingMs" : 0,
"syncingTo" : "192.168.200.245:27017"
"_id" : 2,
"name" : "192.168.200.245:27017",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 14703,
"optime" : Timestamp(, 100),
"optimeDate" : ISODate("T08:11:08Z"),
"self" : true
"_id" : 3,
"name" : "192.168.200.25:27017",
"health" : 1,
"state" : 7,
"stateStr" : "ARBITER",
"uptime" : 26,
"lastHeartbeat" : ISODate("T08:14:34Z"),
"lastHeartbeatRecv" : ISODate("T08:14:34Z"),
"pingMs" : 0,
"syncingTo" : "192.168.200.252:27017"
mmm:PRIMARY> rs.conf()
"_id" : "mmm",
"version" : 9,
"members" : [
"_id" : 1,
"host" : "192.168.200.252:27017"
"_id" : 2,
"host" : "192.168.200.245:27017",
"priority" : 2
"_id" : 3,
"host" : "192.168.200.25:27017",
"arbiterOnly" : true
This shows the 25 server is now an arbiter. A replica set wants an odd number of voting members. When the real environment is limited to two (or another even number of) nodes because of hardware constraints, another kind of member — the arbiter — is introduced to make automatic failover possible. An arbiter only votes: it holds no actual data and serves no requests, so its hardware requirements are minimal.
Practical testing shows that once 50% of the replica set's nodes (arbiters included) are unavailable, the remaining nodes can only be secondaries and the cluster becomes read-only. For example, with 1 primary, 2 secondaries and 1 arbiter, if the two secondaries die, the original primary is demoted to secondary; with 1 primary, 1 secondary and 1 arbiter, even if the primary dies the remaining secondary is automatically promoted to primary. Because an arbiter replicates no data, it lets you achieve a two-node hot-standby effect with minimal extra hardware.
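To see which members carry a vote in the current configuration, the member documents can be listed from the shell; a sketch (votes defaults to 1 when the field is absent):
mmm:PRIMARY> rs.conf().members.forEach(function(m){
...     print(m.host + "  arbiter: " + (m.arbiterOnly ? "yes" : "no") + "  votes: " + (m.votes === undefined ? 1 : m.votes));
... })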
3: Adding a backup (hidden) node
hidden (a member type for special purposes): a hidden member is invisible to both reads and writes and will never be elected Primary, but it can still vote; it is typically used to hold backups.
Remove the 25 node and restart it, then add it back as a hidden node:
mmm:PRIMARY> rs.add({"_id":3,"host":"192.168.200.25:27017","priority":0,"hidden":true})
{ "down" : [ "192.168.200.25:27017" ], "ok" : 1 }
mmm:PRIMARY> rs.conf()
"_id" : "mmm",
"version" : 17,
"members" : [
"_id" : 1,
"host" : "192.168.200.252:27017"
"_id" : 2,
"host" : "192.168.200.245:27017"
"_id" : 3,
"host" : "192.168.200.25:27017",
"priority" : 0,
"hidden" : true
Test whether it can vote: shut down the current Primary and see whether the Primary role fails over automatically.
Shut down the Primary (252):
mmm:PRIMARY> use admin
switched to db admin
mmm:PRIMARY> db.shutdownServer()
Connect with another session and check:
mmm:PRIMARY> rs.status()
"set" : "mmm",
"date" : ISODate("T09:11:45Z"),
"myState" : 1,
"members" : [
"_id" : 1,
"name" : "192.168.200.252:27017",
"health" : 1,
"state" : 1,
"stateStr" :"(not reachable/healthy)",
"uptime" : 4817,
"optime" : Timestamp(, 1),
"optimeDate" : ISODate("T09:10:06Z"),
"self" : true
"_id" : 2,
"name" : "192.168.200.245:27017",
"health" : 1,
"state" : 2,
"stateStr" : "PRIMARY",
"uptime" : 401,
"optime" : Timestamp(, 1),
"optimeDate" : ISODate("T09:10:06Z"),
"lastHeartbeat" : ISODate("T09:11:44Z"),
"lastHeartbeatRecv" : ISODate("T09:11:43Z"),
"pingMs" : 0,
"syncingTo" : "192.168.200.252:27017"
"_id" : 3,
"name" : "192.168.200.25:27017",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 99,
"optime" : Timestamp(, 1),
"optimeDate" : ISODate("T09:10:06Z"),
"lastHeartbeat" : ISODate("T09:11:44Z"),
"lastHeartbeatRecv" : ISODate("T09:11:43Z"),
"pingMs" : 0,
"syncingTo" : "192.168.200.252:27017"
This shows the Primary has failed over, which means the hidden member does have voting rights. Next, check whether it also replicates data.
mmm:PRIMARY> db.test.count()
mmm:PRIMARY> for(var i=0;i<90;i++){db.test.insert({"name":"test"+i,"age":123})}
mmm:PRIMARY> db.test.count()
Secondary:
mmm:SECONDARY> db.test.count()
Wed Feb 19 17:18:19.469 count failed: { "note" : "from execCommand", "ok" : 0, "errmsg" : "not master" } at src/mongo/shell/query.js:180
mmm:SECONDARY> rs.slaveOk()
mmm:SECONDARY> db.test.count()
This shows the hidden member does replicate data.
You can now run backups from this member; a later post will cover backup, restore, and routine maintenance operations.
4: Adding a delayed node
Delayed (a member type for special purposes): it replicates data from the primary with a configurable delay, mainly to protect against accidental deletes that would otherwise reach the secondaries immediately.
Remove the 25 node and restart it, then add it back as a Delayed node:
mmm:PRIMARY> rs.add({"_id":3,"host":"192.168.200.25:27017","priority":0,"hidden":true,"slaveDelay":60})
{ "down" : [ "192.168.200.25:27017" ], "ok" : 1 }
mmm:PRIMARY> rs.conf()
"_id" : "mmm",
"version" : 19,
"members" : [
"_id" : 1,
"host" : "192.168.200.252:27017"
"_id" : 2,
"host" : "192.168.200.245:27017"
"_id" : 3,
"host" : "192.168.200.25:27017",
"priority" : 0,
"slaveDelay" : 60,
"hidden" : true
Test: write to the Primary and check whether the data reaches the delayed node 60 seconds later.
mmm:PRIMARY> db.test.count()
mmm:PRIMARY> for(var i=0;i<200;i++){db.test.insert({"name":"test"+i,"age":123})}
mmm:PRIMARY> db.test.count()
mmm:SECONDARY> db.test.count()
mmm:SECONDARY> db.test.count()
This shows the delayed member does postpone replication by 60 seconds. Besides the member types above, there are also the following (a configuration sketch follows below):
Secondary-Only: cannot become the primary node and only acts as a secondary replica, which keeps under-powered machines from being elected primary.
Non-Voting: a secondary node without voting rights, a pure data-backup node.
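Both of these are configured through the same member-document mechanism used for the hidden and delayed members above; a sketch (member index 2 is just an example):
mmm:PRIMARY> cfg = rs.conf()
mmm:PRIMARY> cfg.members[2].priority = 0   // Secondary-Only: can never be elected primary
mmm:PRIMARY> cfg.members[2].votes = 0      // Non-Voting: still replicates data but casts no vote
mmm:PRIMARY> rs.reconfig(cfg)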
5: Read/write separation
MongoDB replica sets support read/write separation through the Read Preferences feature, which is very flexible (and fairly involved).
The application's driver uses the read preference to decide how reads are routed to the replica set. By default, the client driver sends all reads directly to the primary node, which guarantees strict data consistency.
Five read preference modes are supported (a shell-level sketch follows the list):
primary: the default mode; reads go only to the primary, and if it is unavailable an error or exception is raised.
primaryPreferred: reads usually go to the primary; if it is unavailable (e.g. during failover), reads go to a secondary.
secondary: reads go only to secondaries; if none is available an error or exception is raised.
secondaryPreferred: reads usually go to a secondary; in special cases (such as a single-node deployment) reads go to the primary.
nearest: reads go to the member with the lowest network latency, which may be the primary or a secondary.
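In recent mongo shells the same modes can be tried interactively with setReadPref; a sketch (the test collection is the one used earlier in this post):
mmm:PRIMARY> db.getMongo().setReadPref("secondaryPreferred")
mmm:PRIMARY> db.test.find({"name": "test1"})   // may now be served by a secondary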
Note: MongoDB versions before 2.2 did not fully support Read Preference; with those, a client driver using primaryPreferred actually routes reads to secondary nodes.
Because read/write separation is implemented in the application's driver, it is not covered further here; see the referenced article or search Google for details.
Verification (Python)
Use Python to verify the behaviour of the MongoDB replica set.
1: Disconnect the primary node and see whether writes are affected
#coding:utf-8
import time
from pymongo import ReplicaSetConnection
conn = ReplicaSetConnection("192.168.200.201:27017,192.168.200.202:27017,192.168.200.204:27017", replicaSet="drug", read_preference=2, safe=True)
# Print the Primary server
#print conn.primary
# Print all servers
#print conn.seeds
# Print the Secondary servers
#print conn.secondaries
#print conn.read_preference
#print conn.server_info()
for i in xrange(1000):
    conn.test.tt.insert({"name": "test" + str(i)})
    time.sleep(1)
    print conn.primary
    print conn.secondaries
Output printed while the script runs:
zhoujy@zhoujy:~$ python test.py
(u'192.168.200.201', 27017)
set([('192.168.200.202', 27017), (u'192.168.200.204', 27017)])
(u'192.168.200.201', 27017)
set([('192.168.200.202', 27017), (u'192.168.200.204', 27017)])
(u'192.168.200.201', 27017)
set([('192.168.200.202', 27017), (u'192.168.200.204', 27017)])
(u'192.168.200.201', 27017)
set([('192.168.200.202', 27017), (u'192.168.200.204', 27017)])
(u'192.168.200.201', 27017)
set([('192.168.200.202', 27017), (u'192.168.200.204', 27017)])
('192.168.200.202', 27017)
## The Primary went down; an election produced a new Primary
set([(u'192.168.200.204', 27017)])
('192.168.200.202', 27017)
set([(u'192.168.200.204', 27017)])
('192.168.200.202', 27017)
set([(u'192.168.200.204', 27017)])
('192.168.200.202', 27017)
set([(u'192.168.200.204', 27017)])
('192.168.200.202', 27017)
set([(u'192.168.200.204', 27017)])
('192.168.200.202', 27017)
set([(u'192.168.200.204', 27017), (u'192.168.200.201', 27017)])
## The previously downed Primary was started again and became a Secondary
('192.168.200.202', 27017)
set([(u'192.168.200.204', 27017), (u'192.168.200.201', 27017)])
('192.168.200.202', 27017)
set([(u'192.168.200.204', 27017), (u'192.168.200.201', 27017)])
The specific steps were as follows:
While the script was running, the Primary was taken down to simulate a crash and then started again. The primary role moved from 201 to 202, and 201 became a Secondary. Inspecting the inserted data shows that a stretch of records in the middle was lost.
{ "name" : "GOODODOO15" }
{ "name" : "GOODODOO592" }
{ "name" : "GOODODOO593" }
This data was in fact lost during the election window. If data loss is not acceptable, queue the writes issued during the election and apply them once the new Primary is found.
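Another way to shrink the window for silently lost writes (not used in the script above) is to require acknowledgement from a majority of members before treating a write as successful; a mongo shell sketch using the classic getLastError command:
mmm:PRIMARY> db.test.insert({"name": "important"})
mmm:PRIMARY> db.runCommand({getLastError: 1, w: "majority", wtimeout: 5000})   // blocks until a majority has the write, or times out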
The script above may exit mid-run, depending on the count passed to xrange(), so here is a version rewritten with an outer loop (more intuitive):
#coding:utf-8
import time
from pymongo import ReplicaSetConnection
conn = ReplicaSetConnection("192.168.200.201:27017,192.168.200.202:27017,192.168.200.204:27017", replicaSet="drug", read_preference=2, safe=True)
# Print the Primary server
#print conn.primary
# Print all servers
#print conn.seeds
# Print the Secondary servers
#print conn.secondaries
#print conn.read_preference
#print conn.server_info()
while True:
    for i in xrange(100):
        conn.test.tt.insert({"name": "test" + str(i)})
        print "test" + str(i)
        time.sleep(2)
        print conn.primary
        print conn.secondaries
        print '\n'
This experiment shows that when the Primary goes down, the script keeps writing without any manual intervention; there is only an unavailable window of roughly 10 seconds (the election time). It further confirms that write operations are performed on the Primary.
2: Disconnect the primary node and see whether reads are affected
#coding:utf-8
import time
from pymongo import ReplicaSetConnection
conn = ReplicaSetConnection("192.168.200.201:27017,192.168.200.202:27017,192.168.200.204:27017", replicaSet="drug", read_preference=2, safe=True)
# Print the Primary server
#print conn.primary
# Print all servers
#print conn.seeds
# Print the Secondary servers
#print conn.secondaries
#print conn.read_preference
#print conn.server_info()
for i in xrange(1000):
    time.sleep(1)
    obj = conn.test.tt.find({}, {"_id": 0, "name": 1}).skip(i).limit(1)
    for item in obj:
        print item.values()
    print conn.primary
    print conn.secondaries
Output printed while the script runs:
zhoujy@zhoujy:~$ python tt.py
[u'GOODODOO0']
(u'192.168.200.201', 27017)
set([('192.168.200.202', 27017), (u'192.168.200.204', 27017)])
[u'GOODODOO1']
(u'192.168.200.201', 27017)
set([('192.168.200.202', 27017), (u'192.168.200.204', 27017)])
[u'GOODODOO2']
(u'192.168.200.201', 27017)
set([('192.168.200.202', 27017), (u'192.168.200.204', 27017)])
[u'GOODODOO604']
(u'192.168.200.201', 27017)
set([('192.168.200.202', 27017), (u'192.168.200.204', 27017)])
[u'GOODODOO605']
## The primary (201) went down and was started again; no impact, the next record is read
('192.168.200.202', 27017)
set([(u'192.168.200.204', 27017), (u'192.168.200.201', 27017)])
[u'GOODODOO606']
('192.168.200.202', 27017)
set([(u'192.168.200.204', 27017), (u'192.168.200.201', 27017)])
[u'GOODODOO607']
('192.168.200.202', 27017)
set([(u'192.168.200.204', 27017), (u'192.168.200.201', 27017)])
[u'test8']
('192.168.200.202', 27017)
set([(u'192.168.200.204', 27017), (u'192.168.200.201', 27017)])
[u'test9']
('192.168.200.202', 27017)
set([(u'192.168.200.204', 27017), (u'192.168.200.201', 27017)])
[u'test10']
## The primary went down again and was left down; no impact, the next record is read
(u'192.168.200.204', 27017)
set([(u'192.168.200.201', 27017)])
[u'test11']
(u'192.168.200.204', 27017)
set([(u'192.168.200.201', 27017)])
[u'test12']
(u'192.168.200.204', 27017)
set([(u'192.168.200.201', 27017)])
The specific steps were as follows:
While the script was running, the Primary was taken down and then started again. The primary role moved from 201 to 202, 201 came back as a Secondary, and reading continued without interruption. The Primary was then taken down again and left down; reads were still unaffected.
This experiment shows that when the Primary goes down, the script keeps reading without any manual intervention, which further confirms that reads are served by the Secondaries.
Having only just started with MongoDB, this is what comes to mind for now; the article will be updated from time to time as new points come up.