Redis Cluster Setup (without Sentinel)
Environment
# environment
OS: CentOS 7.2 64-bit
# ip
192.168.21.137 ~ 192.168.21.139
Redis Nodes
Directory and file preparation
# Run on each of the three servers
mkdir /redis
wget -P /redis http://download.redis.io/releases/redis-4.0.1.tar.gz
# Install
cd /redis
tar -zxvf redis-4.0.1.tar.gz
cd redis-4.0.1
make && make install
Nodes
# Configure three Redis nodes on each server, each with its own config file
mkdir -p /redis_cluster/6001 /redis_cluster/6002 /redis_cluster/6003
cp /redis/redis-4.0.1/redis.conf /redis_cluster/6001/
cp /redis/redis-4.0.1/redis.conf /redis_cluster/6002/
cp /redis/redis-4.0.1/redis.conf /redis_cluster/6003/
# Configuration: edit the following directives in each copied redis.conf
# run Redis as a daemon
daemonize yes
# pid file, one per node: 6001~6003
pidfile /var/run/redis_6001.pid
# port, one per node: 6001~6003
port 6001
# enable cluster mode
cluster-enabled yes
# cluster config file, generated automatically on first start
cluster-config-file nodes_6001.conf
# node timeout in milliseconds
cluster-node-timeout 5000
# AOF persistence
appendonly no
# bind address; must be an address the other machines can reach
bind 192.168.21.137
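Rather than editing all three copies by hand, the port-specific settings can be derived from the 6001 config. This is just a convenience sketch for the layout used in this post (the sed pattern is my addition, not part of the original steps); remember the bind address still has to match each server's own IP:
# Edit /redis_cluster/6001/redis.conf as shown above, then derive 6002 and 6003 from it.
# Assumes "6001" only appears in the port, pidfile and cluster-config-file lines.
for port in 6002 6003; do
    sed "s/6001/${port}/g" /redis_cluster/6001/redis.conf > /redis_cluster/${port}/redis.conf
done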
Start the Nodes
redis-server /redis_cluster/6001/redis.conf
redis-server /redis_cluster/6002/redis.conf
redis-server /redis_cluster/6003/redis.conf
Check that the nodes started successfully
[root@localhost /]# ps -ef | grep redis
root     16700     1  0 23:08 ?        00:00:00 redis-server 127.0.0.1:6001
root     16705     1  0 23:08 ?        00:00:00 redis-server 127.0.0.1:6002
root     16710     1  0 23:08 ?        00:00:00 redis-server 127.0.0.1:6003
root     16715 12065  0 23:08 pts/1    00:00:00 grep --color=auto redis
[root@localhost /]# netstat -tnlp | grep redis
tcp    0    0 127.0.0.1:6001    0.0.0.0:*    LISTEN    16700/redis-server
tcp    0    0 127.0.0.1:6002    0.0.0.0:*    LISTEN    16705/redis-server
tcp    0    0 127.0.0.1:6003    0.0.0.0:*    LISTEN    16710/redis-server
[root@localhost /]#
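As an optional extra check (my addition), each node should answer PING through redis-cli; use the bind address you configured on that server:
# Each node should reply PONG
for port in 6001 6002 6003; do
    redis-cli -h 192.168.21.137 -p $port ping
done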
At this point, the Redis nodes are ready.
Building the Cluster
Redis ships with a tool, redis-trib.rb (in the src directory), for building clusters. It is written in Ruby, so a Ruby environment is required.
Install Ruby
yum -y install ruby ruby-devel rubygems rpm-build
Install the Redis gem
[root@localhost /]# gem install redis
Fetching: redis-3.3.3.gem (100%)
Successfully installed redis-3.3.3
Parsing documentation for redis-3.3.3
Installing ri documentation for redis-3.3.3
1 gem installed
Make sure the required ports are open: Redis Cluster needs both the client ports (6001~6003 here) and the cluster bus ports (client port + 10000, i.e. 16001~16003). See my other post on opening ports in Linux.
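On CentOS 7 the default firewall is firewalld; a sketch along these lines (port numbers follow this post's layout) opens both ranges:
# Run on each of the three servers
firewall-cmd --permanent --add-port=6001-6003/tcp
firewall-cmd --permanent --add-port=16001-16003/tcp
firewall-cmd --reload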
Run redis-trib.rb to see the usage help for cluster creation
[root@localhost /]# /redis/redis-4.0.1/src/redis-trib.rb
Usage: redis-trib <command> <options> <arguments ...>
create host1:port1 ... hostN:portN
--replicas <arg>
check host:port
info host:port
fix host:port
--timeout <arg>
reshard host:port
--from <arg>
--to <arg>
--slots <arg>
--yes
--timeout <arg>
--pipeline <arg>
rebalance host:port
--weight <arg>
--auto-weights
--use-empty-masters
--timeout <arg>
--simulate
--pipeline <arg>
--threshold <arg>
add-node new_host:new_port existing_host:existing_port
--slave
--master-id <arg>
del-node host:port node_id
set-timeout host:port milliseconds
call host:port command arg arg .. arg
import host:port
--from <arg>
--copy
--replace
help (show this help)
For check, fix, reshard, del-node, set-timeout you can specify the host and port of any working node in the cluster.
Create the Cluster
# --replicas 2 means assign 2 slaves to each master
[root@localhost ~]# /redis/redis-4.0.1/src/redis-trib.rb create --replicas 2 192.168.21.137:6001 192.168.21.137:6002 192.168.21.137:6003 192.168.21.138:6001 192.168.21.138:6002 192.168.21.138:6003 192.168.21.139:6001 192.168.21.139:6002 192.168.21.139:6003
>>> Creating cluster
>>> Performing hash slots allocation on 9 nodes...
Using 3 masters:
192.168.21.137:6001
192.168.21.138:6001
192.168.21.139:6001
Adding replica 192.168.21.138:6002 to 192.168.21.137:6001
Adding replica 192.168.21.139:6002 to 192.168.21.137:6001
Adding replica 192.168.21.137:6002 to 192.168.21.138:6001
Adding replica 192.168.21.137:6003 to 192.168.21.138:6001
Adding replica 192.168.21.138:6003 to 192.168.21.139:6001
Adding replica 192.168.21.139:6003 to 192.168.21.139:6001
M: 40217a88d10bb85bdd0d082df24f036435b82513 192.168.21.137:6001
   slots:0-5460 (5461 slots) master
S: bc847ed99feac417a066baaa48fa0748ac409c67 192.168.21.137:6002
   replicates 4a391d4987ee5f0cf342108890cf8236d56ea412
S: 840ab2d448387c801b67a958543090ce1d8ee1ba 192.168.21.137:6003
   replicates 4a391d4987ee5f0cf342108890cf8236d56ea412
M: 4a391d4987ee5f0cf342108890cf8236d56ea412 192.168.21.138:6001
   slots:5461-10922 (5462 slots) master
S: b977d15fbeaa3e88b830bf5b6caa22a27a2dce8f 192.168.21.138:6002
   replicates 40217a88d10bb85bdd0d082df24f036435b82513
S: bd0fdf29c06fb27b2aa9eca478ec20ab9daf81da 192.168.21.138:6003
   replicates c32b18ec7a947f01bde73a3c93abc31d60b060ee
M: c32b18ec7a947f01bde73a3c93abc31d60b060ee 192.168.21.139:6001
   slots:10923-16383 (5461 slots) master
S: e323d818a7be4149053669552230f4a98332734c 192.168.21.139:6002
   replicates 40217a88d10bb85bdd0d082df24f036435b82513
S: 3bcd9cf11589f418fe5cbc1895ac5a23b0b6714e 192.168.21.139:6003
   replicates c32b18ec7a947f01bde73a3c93abc31d60b060ee
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join....
>>> Performing Cluster Check (using node 192.168.21.137:6001)
M: 40217a88d10bb85bdd0d082df24f036435b82513 192.168.21.137:6001
   slots:0-5460 (5461 slots) master
   2 additional replica(s)
S: bc847ed99feac417a066baaa48fa0748ac409c67 192.168.21.137:6002
   slots: (0 slots) slave
   replicates 4a391d4987ee5f0cf342108890cf8236d56ea412
S: 3bcd9cf11589f418fe5cbc1895ac5a23b0b6714e 192.168.21.139:6003
   slots: (0 slots) slave
   replicates c32b18ec7a947f01bde73a3c93abc31d60b060ee
S: bd0fdf29c06fb27b2aa9eca478ec20ab9daf81da 192.168.21.138:6003
   slots: (0 slots) slave
   replicates c32b18ec7a947f01bde73a3c93abc31d60b060ee
S: 840ab2d448387c801b67a958543090ce1d8ee1ba 192.168.21.137:6003
   slots: (0 slots) slave
   replicates 4a391d4987ee5f0cf342108890cf8236d56ea412
M: c32b18ec7a947f01bde73a3c93abc31d60b060ee 192.168.21.139:6001
   slots:10923-16383 (5461 slots) master
   2 additional replica(s)
S: b977d15fbeaa3e88b830bf5b6caa22a27a2dce8f 192.168.21.138:6002
   slots: (0 slots) slave
   replicates 40217a88d10bb85bdd0d082df24f036435b82513
M: 4a391d4987ee5f0cf342108890cf8236d56ea412 192.168.21.138:6001
   slots:5461-10922 (5462 slots) master
   2 additional replica(s)
S: e323d818a7be4149053669552230f4a98332734c 192.168.21.139:6002
   slots: (0 slots) slave
   replicates 40217a88d10bb85bdd0d082df24f036435b82513
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
Test the Cluster
# Set a key-value pair testKey--testValue on 192.168.21.138:6001; because a bind IP is set, the -h option cannot be omitted
[root@localhost ~]# redis-cli -h 192.168.21.138 -c -p 6001
192.168.21.138:6001> set testKey testValue
-> Redirected to slot [5203] located at 192.168.21.137:6001
OK
192.168.21.137:6001> get testKey
"testValue"
192.168.21.137:6001> . . .
# Get testKey on 192.168.21.137:6003; the request is automatically redirected to 192.168.21.137:6001, so it works
[root@localhost ~]# redis-cli -h 192.168.21.137 -c -p 6003
192.168.21.137:6003> get testKey
-> Redirected to slot [5203] located at 192.168.21.137:6001
"testValue"
192.168.21.137:6001> . . .
# Get testKey on 192.168.21.139:6002; the request is automatically redirected to 192.168.21.137:6001, so it works
[root@localhost ~]# redis-cli -h 192.168.21.139 -c -p 6002
192.168.21.139:6002> get testKey
-> Redirected to slot [5203] located at 192.168.21.137:6001
"testValue"
192.168.21.137:6001>
At this point, the Redis cluster is complete.
Cluster Operations
Check status
[root@localhost ~]# /redis/redis-4.0.1/src/redis-trib.rb check 192.168.21.139:6002
>>> Performing Cluster Check (using node 192.168.21.139:6002)
S: e323d818a7be4149053669552230f4a98332734c 192.168.21.139:6002
   slots: (0 slots) slave
   replicates 40217a88d10bb85bdd0d082df24f036435b82513
S: 840ab2d448387c801b67a958543090ce1d8ee1ba 192.168.21.137:6003
   slots: (0 slots) slave
   replicates 4a391d4987ee5f0cf342108890cf8236d56ea412
S: bd0fdf29c06fb27b2aa9eca478ec20ab9daf81da 192.168.21.138:6003
   slots: (0 slots) slave
   replicates c32b18ec7a947f01bde73a3c93abc31d60b060ee
M: c32b18ec7a947f01bde73a3c93abc31d60b060ee 192.168.21.139:6001
   slots:10923-16383 (5461 slots) master
   2 additional replica(s)
S: b977d15fbeaa3e88b830bf5b6caa22a27a2dce8f 192.168.21.138:6002
   slots: (0 slots) slave
   replicates 40217a88d10bb85bdd0d082df24f036435b82513
S: bc847ed99feac417a066baaa48fa0748ac409c67 192.168.21.137:6002
   slots: (0 slots) slave
   replicates 4a391d4987ee5f0cf342108890cf8236d56ea412
M: 4a391d4987ee5f0cf342108890cf8236d56ea412 192.168.21.138:6001
   slots:5461-10922 (5462 slots) master
   2 additional replica(s)
M: 40217a88d10bb85bdd0d082df24f036435b82513 192.168.21.137:6001
   slots:0-5460 (5461 slots) master
   2 additional replica(s)
S: 3bcd9cf11589f418fe5cbc1895ac5a23b0b6714e 192.168.21.139:6003
   slots: (0 slots) slave
   replicates c32b18ec7a947f01bde73a3c93abc31d60b060ee
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
View information about all nodes
[root@localhost ~]# redis-cli -h 192.168.21.138 -c -p 6001
192.168.21.138:6001> cluster nodes
e323d818a7be4149053669552230f4a98332734c 192.168.21.139:6002@16002 slave 40217a88d10bb85bdd0d082df24f036435b82513 0 1503628318000 8 connected
bc847ed99feac417a066baaa48fa0748ac409c67 192.168.21.137:6002@16002 slave 4a391d4987ee5f0cf342108890cf8236d56ea412 0 1503628318620 4 connected
c32b18ec7a947f01bde73a3c93abc31d60b060ee 192.168.21.139:6001@16001 master - 0 1503628317000 7 connected 10923-16383
bd0fdf29c06fb27b2aa9eca478ec20ab9daf81da 192.168.21.138:6003@16003 slave c32b18ec7a947f01bde73a3c93abc31d60b060ee 0 1503628318000 7 connected
40217a88d10bb85bdd0d082df24f036435b82513 192.168.21.137:6001@16001 master - 0 1503628318117 1 connected 0-5460
4a391d4987ee5f0cf342108890cf8236d56ea412 192.168.21.138:6001@16001 myself,master - 0 1503628317000 4 connected 5461-10922
840ab2d448387c801b67a958543090ce1d8ee1ba 192.168.21.137:6003@16003 slave 4a391d4987ee5f0cf342108890cf8236d56ea412 0 1503628318519 4 connected
3bcd9cf11589f418fe5cbc1895ac5a23b0b6714e 192.168.21.139:6003@16003 slave c32b18ec7a947f01bde73a3c93abc31d60b060ee 0 1503628317111 9 connected
b977d15fbeaa3e88b830bf5b6caa22a27a2dce8f 192.168.21.138:6002@16002 slave 40217a88d10bb85bdd0d082df24f036435b82513 0 1503628317111 5 connected
192.168.21.138:6001> cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:9
cluster_size:3
cluster_current_epoch:9
cluster_my_epoch:4
cluster_stats_messages_ping_sent:4391
cluster_stats_messages_pong_sent:4314
cluster_stats_messages_meet_sent:2
cluster_stats_messages_sent:8707
cluster_stats_messages_ping_received:4308
cluster_stats_messages_pong_received:4393
cluster_stats_messages_meet_received:6
cluster_stats_messages_received:8707
192.168.21.138:6001>
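The cluster nodes listing is dense; a quick filter (my addition) keeps only the masters and their slot ranges:
# Only lines whose flags contain "master"; slave lines reference their master by ID, not by name
redis-cli -h 192.168.21.138 -p 6001 cluster nodes | grep master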
Node commands
# Add the node at ip:port to the cluster
cluster meet <ip> <port>
# Remove the node identified by node_id from the cluster
cluster forget <node_id>
# Make the current node a slave of the node identified by node_id
cluster replicate <node_id>
# Save the node's cluster configuration file to disk
cluster saveconfig
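These subcommands are issued through redis-cli against a particular node. As a harmless illustration (my example, not a step from this setup), the following asks each local node on 192.168.21.137 to persist its current cluster state:
# cluster saveconfig writes the node's nodes_600x.conf to disk
for port in 6001 6002 6003; do
    redis-cli -h 192.168.21.137 -p $port cluster saveconfig
done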
Slot commands
# Assign one or more slots to the current node
cluster addslots <slot> [slot ...]
# Remove the assignment of one or more slots from the current node
cluster delslots <slot> [slot ...]
# Remove all slots assigned to the current node, leaving it with no assigned slots
cluster flushslots
# Assign the slot to the node identified by node_id; if the slot is already assigned to another node, that node removes it first, then the assignment is made
cluster setslot <slot> node <node_id>
# Migrate the current node's slot to the node identified by node_id
cluster setslot <slot> migrating <node_id>
# Import the slot into the current node from the node identified by node_id
cluster setslot <slot> importing <node_id>
# Cancel the import or migration of the slot
cluster setslot <slot> stable
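To see how the setslot variants fit together, here is a minimal sketch of moving a single slot by hand; <src_*>/<dst_*> are placeholders for the current owner and the new owner (redis-trib.rb reshard automates this flow):
# 1. On the destination node: announce that the slot is being imported from the source
redis-cli -h <dst_ip> -p <dst_port> cluster setslot <slot> importing <src_id>
# 2. On the source node: announce that the slot is migrating to the destination
redis-cli -h <src_ip> -p <src_port> cluster setslot <slot> migrating <dst_id>
# 3. On the source node: list the keys still in the slot and MIGRATE them (repeat until empty)
redis-cli -h <src_ip> -p <src_port> cluster getkeysinslot <slot> 100
redis-cli -h <src_ip> -p <src_port> migrate <dst_ip> <dst_port> <key> 0 5000
# 4. Finally, tell both sides who owns the slot now
redis-cli -h <src_ip> -p <src_port> cluster setslot <slot> node <dst_id>
redis-cli -h <dst_ip> -p <dst_port> cluster setslot <slot> node <dst_id>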
Key commands
# Compute which slot the key should be placed in
cluster keyslot <key>
# Return the number of key-value pairs currently stored in the slot
cluster countkeysinslot <slot>
# Return count keys from the slot
cluster getkeysinslot <slot> <count>
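Tying these back to the test above: testKey hashes to slot 5203, which belongs to 192.168.21.137:6001, so any node can report the slot and the owning master can list its keys:
# Returns 5203, the slot the earlier SET/GET was redirected to
redis-cli -h 192.168.21.138 -p 6001 cluster keyslot testKey
# On the slot's owner: how many keys are in the slot, and up to 10 of them
redis-cli -h 192.168.21.137 -p 6001 cluster countkeysinslot 5203
redis-cli -h 192.168.21.137 -p 6001 cluster getkeysinslot 5203 10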
Scripts
Start the nodes
#!/bin/bash
for i in 1 2 3
do
/redis/redis-4.0.1/src/redis-server /redis_cluster/600$i/redis.conf;
done
Create the cluster
#!/bin/bash
/redis/redis-4.0.1/src/redis-trib.rb create --replicas 2 \
    192.168.21.137:6001 192.168.21.137:6002 192.168.21.137:6003 \
    192.168.21.138:6001 192.168.21.138:6002 192.168.21.138:6003 \
    192.168.21.139:6001 192.168.21.139:6002 192.168.21.139:6003
Shut down
#!/bin/bash
for i in 137 138 139
do
for j in 1 2 3
do
/redis/redis-4.0.1/src/redis-cli -c -h 192.168.21.$i -p 600$j shutdown;
done
done
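Check cluster state
As an optional extra in the same style (paths and addresses as above, my addition), the following loop asks every node for its view of the cluster and prints its cluster_state:
#!/bin/bash
# Each line should end in cluster_state:ok when the cluster is healthy
for i in 137 138 139
do
    for j in 1 2 3
    do
        echo -n "192.168.21.$i:600$j -> "
        /redis/redis-4.0.1/src/redis-cli -h 192.168.21.$i -p 600$j cluster info | grep cluster_state
    done
done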