Setting Up a Consul Cluster


Consul is an open-source distributed service discovery and configuration management system, developed in Go by HashiCorp. It has many strengths: it is built on the Raft consensus protocol, is simple to operate, supports health checks, serves both HTTP and DNS interfaces, supports WAN clusters spanning data centers, ships with a web UI, and runs on Linux, macOS, and Windows.

Quick Start

Preparing the Environment

First, prepare three virtual machines; together they will form the Consul cluster.

Host    IP
s1      172.20.20.20
s2      172.20.20.21
s3      172.20.20.22

With Vagrant, the three VMs can be provisioned quickly.

Run the following on the command line:

» mkdir ms
» cd ms
» vagrant init centos/7

Edit the Vagrantfile to define the three VMs:

# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure("2") do |config|

  config.vm.box = "centos/7"

  config.vm.define "s1" do |s1|
      s1.vm.hostname = "s1"
      s1.vm.network "private_network", ip: "172.20.20.20"
  end

  config.vm.define "s2" do |s2|
      s2.vm.hostname = "s2"
      s2.vm.network "private_network", ip: "172.20.20.21"
  end

  config.vm.define "s3" do |s3|
      s3.vm.hostname = "s3"
      s3.vm.network "private_network", ip: "172.20.20.22"
  end    

end

Start the VMs:

» vagrant up
Bringing machine 's1' up with 'virtualbox' provider...
Bringing machine 's2' up with 'virtualbox' provider...
Bringing machine 's3' up with 'virtualbox' provider...
==> s1: Importing base box 'centos/7'...
==> s1: Matching MAC address for NAT networking...
==> s1: Setting the name of the VM: ms_s1_1528794737477_2031
==> s1: Clearing any previously set network interfaces...
==> s1: Preparing network interfaces based on configuration...
    s1: Adapter 1: nat
    s1: Adapter 2: hostonly
==> s1: Forwarding ports...
    s1: 22 (guest) => 2222 (host) (adapter 1)
==> s1: Booting VM...
==> s1: Waiting for machine to boot. This may take a few minutes...
    s1: SSH address: 127.0.0.1:2222
    s1: SSH username: vagrant
    s1: SSH auth method: private key
    s1:
    s1: Vagrant insecure key detected. Vagrant will automatically replace
    s1: this with a newly generated keypair for better security.
    s1:
    s1: Inserting generated public key within guest...
    s1: Removing insecure key from the guest if it's present...
    s1: Key inserted! Disconnecting and reconnecting using new SSH key...
==> s1: Machine booted and ready!
==> s1: Checking for guest additions in VM...
    s1: No guest additions were detected on the base box for this VM! Guest
    s1: additions are required for forwarded ports, shared folders, host only
    s1: networking, and more. If SSH fails on this machine, please install
    s1: the guest additions and repackage the box to continue.
    s1:
    s1: This is not an error message; everything may continue to work properly,
    s1: in which case you may ignore this message.
==> s1: Setting hostname...
==> s1: Configuring and enabling network interfaces...
    s1: SSH address: 127.0.0.1:2222
    s1: SSH username: vagrant
    s1: SSH auth method: private key
==> s1: Rsyncing folder: /work/training/vagrant/ms/ => /vagrant
==> s2: Importing base box 'centos/7'...
==> s2: Matching MAC address for NAT networking...
==> s2: Setting the name of the VM: ms_s2_1528794795606_2999
==> s2: Fixed port collision for 22 => 2222. Now on port 2200.
==> s2: Clearing any previously set network interfaces...
==> s2: Preparing network interfaces based on configuration...
    s2: Adapter 1: nat
    s2: Adapter 2: hostonly
==> s2: Forwarding ports...
    s2: 22 (guest) => 2200 (host) (adapter 1)
==> s2: Booting VM...
==> s2: Waiting for machine to boot. This may take a few minutes...
    s2: SSH address: 127.0.0.1:2200
    s2: SSH username: vagrant
    s2: SSH auth method: private key
    s2:
    s2: Vagrant insecure key detected. Vagrant will automatically replace
    s2: this with a newly generated keypair for better security.
    s2:
    s2: Inserting generated public key within guest...
    s2: Removing insecure key from the guest if it's present...
    s2: Key inserted! Disconnecting and reconnecting using new SSH key...
==> s2: Machine booted and ready!
==> s2: Checking for guest additions in VM...
    s2: No guest additions were detected on the base box for this VM! Guest
    s2: additions are required for forwarded ports, shared folders, host only
    s2: networking, and more. If SSH fails on this machine, please install
    s2: the guest additions and repackage the box to continue.
    s2:
    s2: This is not an error message; everything may continue to work properly,
    s2: in which case you may ignore this message.
==> s2: Setting hostname...
==> s2: Configuring and enabling network interfaces...
    s2: SSH address: 127.0.0.1:2200
    s2: SSH username: vagrant
    s2: SSH auth method: private key
==> s2: Rsyncing folder: /work/training/vagrant/ms/ => /vagrant
==> s3: Importing base box 'centos/7'...
==> s3: Matching MAC address for NAT networking...
==> s3: Setting the name of the VM: ms_s3_1528794863986_43122
==> s3: Fixed port collision for 22 => 2222. Now on port 2201.
==> s3: Clearing any previously set network interfaces...
==> s3: Preparing network interfaces based on configuration...
    s3: Adapter 1: nat
    s3: Adapter 2: hostonly
==> s3: Forwarding ports...
    s3: 22 (guest) => 2201 (host) (adapter 1)
==> s3: Booting VM...
==> s3: Waiting for machine to boot. This may take a few minutes...
    s3: SSH address: 127.0.0.1:2201
    s3: SSH username: vagrant
    s3: SSH auth method: private key
    s3:
    s3: Vagrant insecure key detected. Vagrant will automatically replace
    s3: this with a newly generated keypair for better security.
    s3:
    s3: Inserting generated public key within guest...
    s3: Removing insecure key from the guest if it's present...
    s3: Key inserted! Disconnecting and reconnecting using new SSH key...
==> s3: Machine booted and ready!
==> s3: Checking for guest additions in VM...
    s3: No guest additions were detected on the base box for this VM! Guest
    s3: additions are required for forwarded ports, shared folders, host only
    s3: networking, and more. If SSH fails on this machine, please install
    s3: the guest additions and repackage the box to continue.
    s3:
    s3: This is not an error message; everything may continue to work properly,
    s3: in which case you may ignore this message.
==> s3: Setting hostname...
==> s3: Configuring and enabling network interfaces...
    s3: SSH address: 127.0.0.1:2201
    s3: SSH username: vagrant
    s3: SSH auth method: private key
==> s3: Rsyncing folder: /work/training/vagrant/ms/ => /vagrant

At this point, the three test VMs are ready.

Single-Node Installation

Log in to VM s1 and switch to the root user:

» vagrant ssh s1
[vagrant@s1 ~]$ su
Password:
[root@s1 vagrant]#

Install a few dependencies:

[root@s1 vagrant]# yum install -y epel-release
[root@s1 vagrant]# yum install -y jq
[root@s1 vagrant]# yum install -y unzip

Download version 1.1.0 to the /tmp directory:

[root@s1 vagrant]# cd /tmp/
[root@s1 tmp]# curl -s https://releases.hashicorp.com/consul/1.1.0/consul_1.1.0_linux_amd64.zip -o consul.zip

Unzip the archive, make the consul binary executable, and move it to /usr/bin/:

[root@s1 tmp]# unzip consul.zip
[root@s1 tmp]# chmod +x consul
[root@s1 tmp]# mv consul /usr/bin/consul

Check that Consul installed successfully:

[root@s1 tmp]# consul
Usage: consul [--version] [--help] <command> [<args>]

Available commands are:
    agent          Runs a Consul agent
    catalog        Interact with the catalog
    event          Fire a new event
    exec           Executes a command on Consul nodes
    force-leave    Forces a member of the cluster to enter the "left" state
    info           Provides debugging information for operators.
    join           Tell Consul agent to join cluster
    keygen         Generates a new encryption key
    keyring        Manages gossip layer encryption keys
    kv             Interact with the key-value store
    leave          Gracefully leaves the Consul cluster and shuts down
    lock           Execute a command holding a lock
    maint          Controls node or service maintenance mode
    members        Lists the members of a Consul cluster
    monitor        Stream logs from a Consul agent
    operator       Provides cluster-level tools for Consul operators
    reload         Triggers the agent to reload configuration files
    rtt            Estimates network round trip time between nodes
    snapshot       Saves, restores and inspects snapshots of Consul server state
    validate       Validate config files/directories
    version        Prints the Consul version
    watch          Watch for changes in Consul

If you see output like the above, the installation succeeded.

Batch Installation

So far only s1 has Consul installed; repeating the same steps on s2 and s3 by hand would be tedious. Fortunately, Vagrant supports shell provisioning scripts.

Make a small change to the Vagrantfile: add the following script definition near the top of the file, before the Vagrant.configure block.

$script = <<SCRIPT

echo "Installing dependencies ..."
yum install -y epel-release
yum install -y jq
yum install -y unzip

echo "Determining Consul version to install ..."
CHECKPOINT_URL="https://checkpoint-api.hashicorp.com/v1/check"
if [ -z "$CONSUL_DEMO_VERSION" ]; then
    CONSUL_DEMO_VERSION=$(curl -s "${CHECKPOINT_URL}"/consul | jq .current_version | tr -d '"')
fi

echo "Fetching Consul version ${CONSUL_DEMO_VERSION} ..."
cd /tmp/
curl -s https://releases.hashicorp.com/consul/${CONSUL_DEMO_VERSION}/consul_${CONSUL_DEMO_VERSION}_linux_amd64.zip -o consul.zip

echo "Installing Consul version ${CONSUL_DEMO_VERSION} ..."
unzip consul.zip
sudo chmod +x consul
sudo mv consul /usr/bin/consul

SCRIPT
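The version lookup in this script pipes the checkpoint API response through jq. The sketch below reproduces that extraction against a mocked response, with sed standing in for jq so it runs without dependencies; the response shape is an assumption based on the script above.

```shell
# Mocked checkpoint API response (shape assumed from the script above)
response='{"product":"consul","current_version":"1.1.0"}'
# Equivalent of: jq .current_version | tr -d '"'
version=$(printf '%s' "$response" | sed -n 's/.*"current_version":"\([^"]*\)".*/\1/p')
echo "$version"   # → 1.1.0
```

Setting the CONSUL_DEMO_VERSION environment variable before provisioning skips this lookup entirely and pins a specific release.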

Then, on the line after the box is specified, add a provisioner that runs the script:

config.vm.box = "centos/7"
config.vm.provision "shell", inline: $script

Destroy the VMs and bring them up again from scratch:

» vagrant destroy
    s3: Are you sure you want to destroy the 's3' VM? [y/N] y
==> s3: Forcing shutdown of VM...
==> s3: Destroying VM and associated drives...
    s2: Are you sure you want to destroy the 's2' VM? [y/N] y
==> s2: Forcing shutdown of VM...
==> s2: Destroying VM and associated drives...
    s1: Are you sure you want to destroy the 's1' VM? [y/N] y
==> s1: Forcing shutdown of VM...
==> s1: Destroying VM and associated drives...
» vagrant up
...

Provisioning takes a while. Once the script finishes, log in to s1, s2, and s3 and run consul on each to verify that the installation succeeded.

Starting the Agent

This section explains some basic Consul concepts and the agent's startup parameters.

Basic Concepts

  • agent: every member of a Consul cluster runs an agent, started with the consul agent command. An agent runs in either server mode or client mode; nodes running in server mode are called server nodes, and nodes running in client mode are called client nodes.
  • client: an agent running in client mode. In this mode, all services registered with the node are forwarded to a server; the client itself does not persist this information.
  • server: an agent running in server mode. It provides the same functionality as a client, with one difference: it persists all information locally, so the data survives a failure.

Startup Parameters

  • bootstrap-expect: the number of server nodes the cluster expects; leader election begins only once this many servers have joined
  • server: run the agent in server mode
  • data-dir: the directory where the agent stores its state; the agent must be able to read and write it
  • node: the name of this node (must be unique within the cluster)
  • bind: the address this node binds to
  • config-dir: the directory holding configuration files such as service definitions; by default every file ending in .json is loaded
  • enable-script-checks=true: enable script-based health checks
  • datacenter: the name of the datacenter
  • join: join an existing cluster at the given address
  • ui: enable the built-in web UI
  • client: the address the HTTP API and web UI listen on; the default 127.0.0.1 allows only local access, so change it to 0.0.0.0 to allow access from other machines
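All of these flags can alternatively be supplied in a JSON file loaded via -config-dir. The sketch below writes such a file; the path /tmp/consul-demo is illustrative (not from this article), and the key names are the underscored config-file equivalents of the flags, so verify them against the Consul documentation before relying on this.

```shell
# Sketch: a config file equivalent to the single-node CLI flags.
# /tmp/consul-demo is an illustrative path, not part of the original setup.
mkdir -p /tmp/consul-demo
cat > /tmp/consul-demo/server.json <<'EOF'
{
  "server": true,
  "bootstrap_expect": 3,
  "data_dir": "/etc/consul.d",
  "node_name": "node1",
  "bind_addr": "172.20.20.20",
  "ui": true,
  "client_addr": "0.0.0.0"
}
EOF
# Show the node name we just wrote
grep '"node_name"' /tmp/consul-demo/server.json
```

With such a file in place, the agent could be started as `consul agent -config-dir /tmp/consul-demo` instead of passing each flag individually.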

First, start a single-node setup. Switch to root and run:

[root@s1 vagrant]# consul agent -server -bootstrap-expect 1 -data-dir /etc/consul.d -node=node1 -bind=172.20.20.20 -ui -client 0.0.0.0

Open http://172.20.20.20:8500/ in a browser; if the web UI loads, the agent started successfully.

[Screenshot: Consul web UI]

Building the Server Cluster

Log in to VM s1, switch to root, and start Consul with the expected server count set to 3:

[root@s1 vagrant]# consul agent -server -bootstrap-expect 3 -data-dir /etc/consul.d -node=node1 -bind=172.20.20.20 -ui -client 0.0.0.0

Log in to VM s2, switch to root, and start Consul with the expected server count set to 3, joining it to s1. Note that node names must not repeat:

[root@s2 vagrant]# consul agent -server -bootstrap-expect 3 -data-dir /etc/consul.d -node=node2 -bind=172.20.20.21 -ui -client 0.0.0.0 -join 172.20.20.20

Log in to VM s3 and repeat the step used on s2:

[root@s3 vagrant]# consul agent -server -bootstrap-expect 3 -data-dir /etc/consul.d -node=node3 -bind=172.20.20.22 -ui -client 0.0.0.0 -join 172.20.20.20

Refresh the web UI and you should now see three nodes.

[Screenshot: Consul web UI showing the three nodes]

At this point, the cluster is up.
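A note on why -bootstrap-expect 3 matters: Raft requires a majority (quorum) of servers to agree, so a 3-server cluster tolerates the loss of one server. The arithmetic can be sketched as:

```shell
# Raft needs a majority of servers; quorum = floor(n/2) + 1
for n in 1 3 5; do
  quorum=$(( n / 2 + 1 ))
  tolerated=$(( n - quorum ))
  echo "servers=$n quorum=$quorum failures_tolerated=$tolerated"
done
# → servers=1 quorum=1 failures_tolerated=0
# → servers=3 quorum=2 failures_tolerated=1
# → servers=5 quorum=3 failures_tolerated=2
```

This is why production Consul deployments typically use 3 or 5 servers: even numbers raise the quorum size without improving fault tolerance.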

Advanced Operations

Cluster Members

In another terminal on VM s1, run consul members to list the members of the Consul cluster:

[root@s1 vagrant]# consul members
Node   Address            Status  Type    Build  Protocol  DC   Segment
node1  172.20.20.20:8301  alive   server  1.1.0  2         dc1  <all>
node2  172.20.20.21:8301  alive   server  1.1.0  2         dc1  <all>
node3  172.20.20.22:8301  alive   server  1.1.0  2         dc1  <all>

Querying Nodes

Install the dig tool and query Consul's DNS interface on port 8600 (node lookups use names of the form <node>.node.consul):

[root@s1 vagrant]# yum install -y bind-utils
[root@s1 vagrant]# dig @172.20.20.20 -p 8600 node2.node.consul

; <<>> DiG 9.9.4-RedHat-9.9.4-61.el7 <<>> @172.20.20.20 -p 8600 node2.node.consul
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 38194
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;node2.node.consul.             IN      A

;; ANSWER SECTION:
node2.node.consul.      0       IN      A       172.20.20.21

;; Query time: 33 msec
;; SERVER: 172.20.20.20#8600(172.20.20.20)
;; WHEN: Tue Jun 12 15:49:53 UTC 2018
;; MSG SIZE  rcvd: 62

Leaving the Cluster

One way to leave the cluster is simply to stop the agent (with Ctrl-C).

Press Ctrl-C on VM s2, then query the cluster members from s1 again:

[root@s1 vagrant]# consul members
Node   Address            Status  Type    Build  Protocol  DC   Segment
node1  172.20.20.20:8301  alive   server  1.1.0  2         dc1  <all>
node2  172.20.20.21:8301  failed  server  1.1.0  2         dc1  <all>
node3  172.20.20.22:8301  alive   server  1.1.0  2         dc1  <all>

You can see that node2's status is failed.
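Output like this is easy to script against. As a sketch, a hypothetical monitoring check could flag members that are not alive; the sample rows below are copied from the listing above, and column 3 is the Status field:

```shell
# Sample `consul members` rows (copied from the listing above)
members='node1  172.20.20.20:8301  alive   server  1.1.0  2  dc1  <all>
node2  172.20.20.21:8301  failed  server  1.1.0  2  dc1  <all>
node3  172.20.20.22:8301  alive   server  1.1.0  2  dc1  <all>'
# Print the name of any member whose Status column is not "alive"
printf '%s\n' "$members" | awk '$3 != "alive" {print $1}'
# → node2
```

In a live setup the sample variable would be replaced by the output of `consul members` itself.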

Restart the agent on VM s2. Then, in another terminal on s2, run:

[root@s2 vagrant]# consul leave
Graceful leave complete

Query the cluster members from s1:

[root@s1 vagrant]# consul members
Node   Address            Status  Type    Build  Protocol  DC   Segment
node1  172.20.20.20:8301  alive   server  1.1.0  2         dc1  <all>
node2  172.20.20.21:8301  left    server  1.1.0  2         dc1  <all>
node3  172.20.20.22:8301  alive   server  1.1.0  2         dc1  <all>

node2's status is now left; it has left the cluster gracefully.
