OpenStack Installation Guide (Juno) - Adding the Object Storage Service (Swift) - Install and Configure

Install and configure on the controller node

Create the swift service credentials and API endpoints

  1. Create the service credentials:

    Source the admin credentials:
    $ source admin-openrc.sh

    Create the swift user:
    <pre>$ keystone user-create --name swift --pass SWIFT_PASS
+----------+----------------------------------+
| Property |              Value               |
+----------+----------------------------------+
|  email   |                                  |
| enabled  |               True               |
|    id    | dcf5d53f027b44d38c205ad06717812c |
|   name   |              swift               |
| username |              swift               |
+----------+----------------------------------+</pre>
Replace SWIFT_PASS with a suitable password.

Add the admin role to the swift user:
$ keystone user-role-add --user swift --tenant service --role admin
This command produces no output.

Create the swift service entity:
<pre>$ keystone service-create --name swift --type object-store \
  --description "OpenStack Object Storage"
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |     OpenStack Object Storage     |
|   enabled   |               True               |
|      id     | 11519978722e4fb4be75f086aca49334 |
|     name    |              swift               |
|     type    |           object-store           |
+-------------+----------------------------------+</pre>

  2. Create the Object Storage service API endpoints:
    <pre>$ keystone endpoint-create \
  --service-id $(keystone service-list | awk '/ object-store / {print $2}') \
  --publicurl 'http://controller:8080/v1/AUTH_%(tenant_id)s' \
  --internalurl 'http://controller:8080/v1/AUTH_%(tenant_id)s' \
  --adminurl http://controller:8080 \
  --region regionOne
+-------------+----------------------------------------------+
|   Property  |                    Value                     |
+-------------+----------------------------------------------+
|   adminurl  |            http://controller:8080            |
|      id     |       ec003b88a6144afda3fc2b34acb93ded       |
| internalurl | http://controller:8080/v1/AUTH_%(tenant_id)s |
|  publicurl  | http://controller:8080/v1/AUTH_%(tenant_id)s |
|    region   |                  regionOne                   |
|  service_id |       11519978722e4fb4be75f086aca49334       |
+-------------+----------------------------------------------+</pre>
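The `awk` filter embedded in the endpoint-create command can be tried on its own. A minimal sketch against a canned `keystone service-list` table (using the sample id from this section; a real deployment's id will differ):

```shell
# Simulated `keystone service-list` output:
service_list='+----------------------------------+-------+--------------+--------------------------+
|                id                |  name |     type     |       description        |
+----------------------------------+-------+--------------+--------------------------+
| 11519978722e4fb4be75f086aca49334 | swift | object-store | OpenStack Object Storage |
+----------------------------------+-------+--------------+--------------------------+'

# Match the row whose type column is " object-store " and print the
# second whitespace-separated field; with "|" separators, field 1 is the
# bare "|" and field 2 is the id.
service_id=$(printf '%s\n' "$service_list" | awk '/ object-store / {print $2}')
echo "$service_id"
```

The header row does not contain the string " object-store ", so only the data row matches.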

Install and configure the components

  1. Install the packages:
    # apt-get install swift swift-proxy python-swiftclient python-keystoneclient python-keystonemiddleware memcached
  2. Create the /etc/swift directory.
  3. Obtain the proxy service configuration file from the Object Storage source repository:
    # curl -o /etc/swift/proxy-server.conf https://raw.githubusercontent.com/openstack/swift/stable/juno/etc/proxy-server.conf-sample
  4. Edit the /etc/swift/proxy-server.conf file:

    In the [DEFAULT] section, configure the bind port, user, and configuration directory:
    <pre>[DEFAULT]
...
bind_port = 8080
user = swift
swift_dir = /etc/swift</pre>

In the [pipeline:main] section, enable the appropriate modules:
<pre>[pipeline:main]
pipeline = authtoken cache healthcheck keystoneauth proxy-logging proxy-server</pre>

In the [app:proxy-server] section, enable account management:
<pre>[app:proxy-server]
...
allow_account_management = true
account_autocreate = true</pre>

In the [filter:keystoneauth] section, configure the operator roles:
<pre>[filter:keystoneauth]
use = egg:swift#keystoneauth
...
operator_roles = admin,_member_</pre>

In the [filter:authtoken] section, configure Identity service access:
<pre>[filter:authtoken]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
...
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = swift
admin_password = SWIFT_PASS
delay_auth_decision = true</pre>
Replace SWIFT_PASS with the password chosen when creating the swift user. Comment out any auth_host, auth_port, and auth_protocol options, because the identity_uri option replaces them.

In the [filter:cache] section, configure the memcached location:
<pre>[filter:cache]
...
memcache_servers = 127.0.0.1:11211</pre>
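The edits above are usually made by hand in vi, but they can also be scripted. A minimal sketch, run against a hypothetical cut-down stand-in for proxy-server.conf rather than the real sample file (which is much longer):

```shell
# Hypothetical stand-in containing only the lines this step touches:
conf=$(mktemp)
cat > "$conf" <<'EOF'
[DEFAULT]
bind_port = 80
user = nobody
swift_dir = /etc/swift

[app:proxy-server]
use = egg:swift#proxy
EOF

# Apply the [DEFAULT] settings from this step non-interactively.
sed -i -e 's/^bind_port = .*/bind_port = 8080/' \
       -e 's/^user = .*/user = swift/' "$conf"

# Enable account management; appending works here only because
# [app:proxy-server] is the last section of this stand-in file.
printf 'allow_account_management = true\naccount_autocreate = true\n' >> "$conf"

grep '^bind_port' "$conf"   # bind_port = 8080
```

For section-aware edits on the full sample file, an ini-editing tool such as crudini is safer than plain sed, since the same key can appear in several sections.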

Install and configure on the object storage nodes

This setup requires two object storage nodes, each containing two empty local block storage devices. Each of the devices, /dev/sdb and /dev/sdc, must contain a suitable partition table with one partition occupying the entire device.

Configure storage

Add two disks to each object node: Settings -> Storage -> Controller: SATA -> Add Hard Disk.

Basic environment configuration for the object nodes

Create two storage nodes, object1 and object2, from the virtual machine template described earlier, and configure their basic environment as follows:

Configure networking

Configure the object node virtual machine networking under Settings -> Network:

  1. Adapter 1: Attached to -> Host-Only Adapter; Name -> VirtualBox Host-Only Ethernet Adapter #2; Adapter Type -> Paravirtualized Network (virtio-net); Promiscuous Mode -> Allow All; Cable Connected -> checked.
  2. Adapter 2: Attached to -> NAT; Adapter Type -> Paravirtualized Network (virtio-net); Cable Connected -> checked.

After starting each virtual machine, configure its network by editing the /etc/network/interfaces file and adding the following:
object1:
<pre># The management network interface
auto eth0
iface eth0 inet static
address 10.10.10.14
netmask 255.255.255.0

# The NAT network
auto eth1
iface eth1 inet dhcp</pre>
object2:
<pre># The management network interface
auto eth0
iface eth0 inet static
address 10.10.10.15
netmask 255.255.255.0

# The NAT network
auto eth1
iface eth1 inet dhcp</pre>

Configure name resolution: edit the /etc/hostname file to change the hostname to object1 or object2 respectively, then edit the /etc/hosts file and add the following:
<pre>10.10.10.10 controller
10.10.10.11 compute
10.10.10.12 network
10.10.10.13 block
10.10.10.14 object1
10.10.10.15 object2</pre>

Configure NTP

Edit the /etc/ntp.conf configuration file and add the following:

<pre>server controller iburst</pre>
Comment out all other server lines. If the /var/lib/ntp/ntp.conf.dhcp file exists, delete it.

Restart the NTP service: # service ntp restart
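The NTP change can likewise be scripted on each node. A sketch against a hypothetical two-line stand-in for /etc/ntp.conf:

```shell
# Hypothetical stand-in for the stock /etc/ntp.conf:
ntp_conf=$(mktemp)
cat > "$ntp_conf" <<'EOF'
server 0.ubuntu.pool.ntp.org
server 1.ubuntu.pool.ntp.org
EOF

# Comment out every existing "server" line, then point the node at the
# controller, as this step describes.
sed -i 's/^server /#server /' "$ntp_conf"
echo 'server controller iburst' >> "$ntp_conf"

grep -v '^#' "$ntp_conf"   # server controller iburst
```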

Configure storage

  1. Install the supporting packages:

# apt-get install xfsprogs rsync

  2. Format the /dev/sdb and /dev/sdc partitions as XFS:
    <pre># mkfs.xfs /dev/sdb
meta-data=/dev/sdb               isize=256    agcount=4, agsize=524288 blks
         =                       sectsz=512   attr=2, projid32bit=0
data     =                       bsize=4096   blocks=2097152, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0</pre>
<pre># mkfs.xfs /dev/sdc
meta-data=/dev/sdc               isize=256    agcount=4, agsize=524288 blks
         =                       sectsz=512   attr=2, projid32bit=0
data     =                       bsize=4096   blocks=2097152, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0</pre>

  3. Create the mount point directory structure:

# mkdir -p /srv/node/sdb
# mkdir -p /srv/node/sdc

  4. Edit the /etc/fstab file and add the following:

    <pre>/dev/sdb /srv/node/sdb xfs noatime,nodiratime,nobarrier,logbufs=8 0 2
/dev/sdc /srv/node/sdc xfs noatime,nodiratime,nobarrier,logbufs=8 0 2</pre>

  5. Mount the devices:

# mount /srv/node/sdb
# mount /srv/node/sdc

  6. Create the /etc/rsyncd.conf file and add the following:
    <pre>uid = swift

gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = MANAGEMENT_INTERFACE_IP_ADDRESS
[account]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/account.lock
[container]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/container.lock
[object]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/object.lock</pre>
MANAGEMENT_INTERFACE_IP_ADDRESS is the IP address of the object node on the management network: 10.10.10.14 for object1 and 10.10.10.15 for object2.

  7. Edit the /etc/default/rsync file to enable the rsync service:
    <pre>RSYNC_ENABLE=true</pre>
  8. Start the rsync service:

# service rsync start
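Since the only per-node difference in /etc/rsyncd.conf is the address line, the file can be generated from a template instead of being typed twice. A sketch writing to a temporary file rather than /etc/rsyncd.conf, showing only the [account] section (the [container] and [object] sections follow the same pattern):

```shell
MANAGEMENT_INTERFACE_IP_ADDRESS=10.10.10.14   # object1; use 10.10.10.15 on object2
rsyncd_conf=$(mktemp)

# Unquoted EOF so $MANAGEMENT_INTERFACE_IP_ADDRESS is substituted.
cat > "$rsyncd_conf" <<EOF
uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = $MANAGEMENT_INTERFACE_IP_ADDRESS
[account]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/account.lock
EOF

grep '^address' "$rsyncd_conf"   # address = 10.10.10.14
```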

Install and configure the storage node components

  1. Install the packages:

# apt-get install swift swift-account swift-container swift-object

  2. Obtain the account, container, and object service configuration files from the Object Storage source repository:

# curl -o /etc/swift/account-server.conf https://raw.githubusercontent.com/openstack/swift/stable/juno/etc/account-server.conf-sample
# curl -o /etc/swift/container-server.conf https://raw.githubusercontent.com/openstack/swift/stable/juno/etc/container-server.conf-sample
# curl -o /etc/swift/object-server.conf https://raw.githubusercontent.com/openstack/swift/stable/juno/etc/object-server.conf-sample

  3. Edit the /etc/swift/account-server.conf file:
    In the [DEFAULT] section, configure the bind IP address, bind port, user, configuration directory, and mount point directory:
    <pre>[DEFAULT]
...
bind_ip = MANAGEMENT_INTERFACE_IP_ADDRESS
bind_port = 6002
user = swift
swift_dir = /etc/swift
devices = /srv/node</pre>
MANAGEMENT_INTERFACE_IP_ADDRESS is the IP address of the object node on the management network: 10.10.10.14 for object1 and 10.10.10.15 for object2.

In the [pipeline:main] section, enable the appropriate modules:
<pre>[pipeline:main]
pipeline = healthcheck recon account-server</pre>

In the [filter:recon] section, configure the recon (metrics) cache directory:
<pre>[filter:recon]
...
recon_cache_path = /var/cache/swift</pre>

  4. Edit the /etc/swift/container-server.conf file:
    In the [DEFAULT] section, configure the bind IP address, bind port, user, configuration directory, and mount point directory:
    <pre>[DEFAULT]
...
bind_ip = MANAGEMENT_INTERFACE_IP_ADDRESS
bind_port = 6001
user = swift
swift_dir = /etc/swift
devices = /srv/node</pre>
MANAGEMENT_INTERFACE_IP_ADDRESS is the IP address of the object node on the management network: 10.10.10.14 for object1 and 10.10.10.15 for object2.

In the [pipeline:main] section, enable the appropriate modules:
<pre>[pipeline:main]
pipeline = healthcheck recon container-server</pre>

In the [filter:recon] section, configure the recon (metrics) cache directory:
<pre>[filter:recon]
...
recon_cache_path = /var/cache/swift</pre>

  5. Edit the /etc/swift/object-server.conf file:

    In the [DEFAULT] section, configure the bind IP address, bind port, user, configuration directory, and mount point directory:
    <pre>[DEFAULT]
...
bind_ip = MANAGEMENT_INTERFACE_IP_ADDRESS
bind_port = 6000
user = swift
swift_dir = /etc/swift
devices = /srv/node</pre>
MANAGEMENT_INTERFACE_IP_ADDRESS is the IP address of the object node on the management network: 10.10.10.14 for object1 and 10.10.10.15 for object2.

In the [pipeline:main] section, enable the appropriate modules:
<pre>[pipeline:main]
pipeline = healthcheck recon object-server</pre>

In the [filter:recon] section, configure the recon (metrics) cache directory:
<pre>[filter:recon]
...
recon_cache_path = /var/cache/swift</pre>

  6. Ensure proper ownership of the mount point directory structure:

# chown -R swift:swift /srv/node

  7. Create the recon directory and ensure proper ownership of it:

# mkdir -p /var/cache/swift
# chown -R swift:swift /var/cache/swift
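The three server configuration files above differ only in bind_port and the name of the final pipeline module, so the shared fragments can be generated in one loop instead of being edited three times by hand. A sketch writing the fragments from this section into a temporary directory standing in for /etc/swift (object1's management address is used here; adjust per node):

```shell
swift_dir=$(mktemp -d)   # stands in for /etc/swift

# service:port pairs taken from the steps above
for svc_port in account:6002 container:6001 object:6000; do
    svc=${svc_port%%:*}    # e.g. "account"
    port=${svc_port##*:}   # e.g. "6002"
    cat > "$swift_dir/$svc-server.conf" <<EOF
[DEFAULT]
bind_ip = 10.10.10.14
bind_port = $port
user = swift
swift_dir = /etc/swift
devices = /srv/node

[pipeline:main]
pipeline = healthcheck recon $svc-server

[filter:recon]
recon_cache_path = /var/cache/swift
EOF
done

grep '^bind_port' "$swift_dir/object-server.conf"   # bind_port = 6000
```

Note this writes only the options changed in this section, not the full stock sample files; on a real node the fragments would be merged into the files fetched with curl above.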
