Our internal openstack-deploy project does not yet support multi-region deployment, but it can still be configured by hand.

Multi-region roughly means that two OpenStack environments share a single Keystone and dashboard; in other words, the regionTwo OpenStack environment authenticates against regionOne's Keystone.

Steps:

1. First, roll out two OpenStack environments with the openstack-deploy scripts; both are highly available (HA) environments.

2. Make sure the hostnames of the two environments can resolve each other.

[root@SHBAK0801ctrl keystone(keystone_admin)]# cat /etc/hosts  # both environments are HA, so there are two VIPs
172.16.20.20        ceph02
172.16.20.19        ceph01
172.16.120.250      controller
172.16.20.250       controller_vip
172.16.120.10       SHBAK0801ctrl
172.16.120.11       SHBAK0802ctrl
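A quick sanity check for step 2 (a minimal sketch; the hostnames are the ones from the hosts file above, and getent/ping are assumed to be available on the controllers):

# run on a controller in each environment; every name should resolve locally
getent hosts SHBAK0801ctrl SHBAK0802ctrl ceph01 ceph02 controller controller_vip
ping -c 1 SHBAK0801ctrl
ping -c 1 controller_vip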

3. Stop or uninstall all keystone and httpd services in the second environment.
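A minimal sketch of step 3, assuming keystone and horizon on the second environment run as the usual RDO services (openstack-keystone and httpd); adjust the names if pacemaker manages them instead:

# on ceph01 and ceph02, the second environment's controllers
service openstack-keystone stop
service httpd stop
chkconfig openstack-keystone off    # keep them from coming back after a reboot
chkconfig httpd off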

4. On the first environment, create the services and endpoints with region set to regionTwo (keystone itself does not need another service entry, because both environments share one Keystone and dashboard). This can be done with /srv/openstack-deploy/salt/dev/openstack/keystone/user-role-tenant.sls; the modified content is below:

[root@SHBAK0801ctrl keystone(keystone_admin)]# cat user-role-tenant.sls
{% if salt['pillar.get']('config_ha_install',False) %}
{% set vip_hostname = salt['pillar.get']('basic:pacemaker:VIP_HOSTNAME') %}
{% else %}
{% set vip_hostname = grains['host'] %}
{% endif %}

keystone-tenants:
  keystone.tenant_present:
    - names:
      - admin
      - service

keystone-roles:
  keystone.role_present:
    - names:
      - admin
      - Member
      - workflow
{% if salt['pillar.get']('config_heat_install',False) %}
      - heat_stack_user
      - heat_stack_owner
{% endif %}

{{ salt['pillar.get']('keystone:ADMIN_USER','admin') }}:
  keystone.user_present:
    - password: "{{ salt['pillar.get']('keystone:ADMIN_PASS','admin') }}"
    - email: admin@domain.com
    - roles:
      - admin:
        - admin
      - service:
        - admin
        - Member
    - require:
      - keystone: keystone-tenants
      - keystone: keystone-roles

{% set user_list = ['glance','nova','neutron','cinder','heat','ceilometer'] %}
{% for user in user_list %}
{% if salt['pillar.get']('config_' + user + '_install',True) %}
{{ user }}:
  keystone.user_present:
{% if user == 'glance' %}
    - name: {{ salt['pillar.get']('glance:AUTH_ADMIN_GLANCE_USER') }}
    - password: "{{ salt['pillar.get']('glance:AUTH_ADMIN_GLANCE_PASS') }}"
{% elif user == 'nova' %}
    - name: {{ salt['pillar.get']('nova:AUTH_ADMIN_NOVA_USER') }}
    - password: "{{ salt['pillar.get']('nova:AUTH_ADMIN_NOVA_PASS') }}"
{% elif user == 'neutron' %}
    - name: {{ salt['pillar.get']('neutron:AUTH_ADMIN_NEUTRON_USER') }}
    - password: "{{ salt['pillar.get']('neutron:AUTH_ADMIN_NEUTRON_PASS') }}"
{% elif user == 'cinder' %}
    - name: {{ salt['pillar.get']('cinder:AUTH_ADMIN_CINDER_USER') }}
    - password: "{{ salt['pillar.get']('cinder:AUTH_ADMIN_CINDER_PASS') }}"
{% elif user == 'ceilometer' %}
    - name: {{ salt['pillar.get']('ceilometer:AUTH_ADMIN_CEILOMETER_USER') }}
    - password: "{{ salt['pillar.get']('ceilometer:AUTH_ADMIN_CEILOMETER_PASS') }}"
{% elif user == 'heat' %}
    - name: {{ salt['pillar.get']('heat:AUTH_ADMIN_HEAT_USER') }}
    - password: "{{ salt['pillar.get']('heat:AUTH_ADMIN_HEAT_PASS') }}"
{% endif %}
    - email: {{ user }}@domain.com
    - tenant: service
    - roles:
      - service:
        - admin
    - require:
      - keystone: keystone-tenants
      - keystone: keystone-roles
{% endif %}
{% endfor %}

{% set service_list = ['glance','nova','neutron','cinder','cinderv2','heat','ceilometer'] %}  # modified here
{% for srv in service_list %}
{% if salt['pillar.get']('config_' + srv + '_install',True) %}
{{ srv }}-srv:
  keystone.service_present:
    - name: {{ srv }}-srv
{% if srv == 'keystone' %}
    - service_type: identity
    - description: Keystone Identity Service
{% elif srv == 'glance' %}
    - service_type: image
    - description: Glance Image Service
{% elif srv == 'nova' %}
    - service_type: compute
    - description: Nova Compute Service
{% elif srv == 'neutron' %}
    - service_type: network
    - description: Neutron Network Service
{% elif srv == 'cinder' %}
    - service_type: volume
    - description: cinder Volume Service
{% elif srv == 'cinderv2' %}
    - service_type: volumev2
    - description: cinder Volume Service
{% elif srv == 'ceilometer' %}
    - service_type: metering
    - description: Telemetry Service
{% elif srv == 'heat' %}
    - service_type: orchestration
    - description: Orchestration Service
{% endif %}
{% endif %}
{% endfor %}

{% for srv in service_list %}
{% if salt['pillar.get']('config_' + srv + '_install',True) %}
{{ srv }}-endpoint:
  keystone.endpoint_present:
    - name: {{ srv }}-srv
{% if srv == 'keystone' %}
    - publicurl:   http://{{ 'controller_vip' }}:5000/v2.0         # modified here, and likewise for all the URLs below
    - internalurl: http://{{ 'controller_vip' }}:5000/v2.0
    - adminurl:    http://{{ 'controller_vip' }}:35357/v2.0
{% elif srv == 'glance' %}
    - publicurl:   http://{{ 'controller_vip' }}:9292
    - internalurl: http://{{ 'controller_vip' }}:9292
    - adminurl:    http://{{ 'controller_vip' }}:9292
{% elif srv == 'nova' %}
    - publicurl:   http://{{ 'controller_vip' }}:8774/v2/%(tenant_id)s
    - internalurl: http://{{ 'controller_vip' }}:8774/v2/%(tenant_id)s
    - adminurl:    http://{{ 'controller_vip' }}:8774/v2/%(tenant_id)s
{% elif srv == 'neutron' %}
    - publicurl:   http://{{ 'controller_vip' }}:9696
    - internalurl: http://{{ 'controller_vip' }}:9696
    - adminurl:    http://{{ 'controller_vip' }}:9696
{% elif srv == 'cinder' %}
    - publicurl:    http://{{ 'controller_vip' }}:8776/v1/%(tenant_id)s
    - internalurl:  http://{{ 'controller_vip' }}:8776/v1/%(tenant_id)s
    - adminurl:     http://{{ 'controller_vip' }}:8776/v1/%(tenant_id)s
{% elif srv == 'cinderv2' %}
    - publicurl:    http://{{ 'controller_vip' }}:8776/v2/%(tenant_id)s
    - internalurl:  http://{{ 'controller_vip' }}:8776/v2/%(tenant_id)s
    - adminurl:     http://{{ 'controller_vip' }}:8776/v2/%(tenant_id)s
{% elif srv == 'ceilometer' %}
    - publicurl:    http://{{ 'controller_vip' }}:8777
    - internalurl:  http://{{ 'controller_vip' }}:8777
    - adminurl:     http://{{ 'controller_vip' }}:8777
{% elif srv == 'heat' %}
    - publicurl:    http://{{ 'controller_vip' }}:8004/v1/%(tenant_id)s
    - internalurl:  http://{{ 'controller_vip' }}:8004/v1/%(tenant_id)s
    - adminurl:     http://{{ 'controller_vip' }}:8004/v1/%(tenant_id)s
{% endif %}
    - region: regionTwo
{% endif %}
{% endfor %}

# Run on the first environment (if it fails with errors, just run it a few more times):
salt 'SHBAK0801ctrl' state.sls dev.openstack.keystone.user-role-tenant -l debug

[root@SHBAK0801ctrl ~(keystone_admin)]# keystone service-list
+----------------------------------+----------------+---------------+---------------------------+
|                id                |      name      |      type     |        description        |
+----------------------------------+----------------+---------------+---------------------------+
| 16bf05c877414ab9af01016c53af226e |   ceilometer   |    metering   |     Telemetry Service     |
| c22695a4c2b54d89a5971b67e1877289 | ceilometer-srv |    metering   |     Telemetry Service     |
| 67443adbe4c947419e67e63a2991c01e |     cinder     |     volume    |   cinder Volume Service   |
| f26aceb7102d4a10b6f41931ee6f3dc7 |   cinder-srv   |     volume    |   cinder Volume Service   |
| a50652c6d46c41309146f529f9c6a83d |    cinderv2    |    volumev2   |   cinder Volume Service   |
| 3ff9a4a88531443585e213d3ae4f7e7e |  cinderv2-srv  |    volumev2   |   cinder Volume Service   |
| 1b42d0b28a0e41de847eb501da2662d7 |     glance     |     image     |    Glance Image Service   |
| 232caae6b9fc435390c86c47ff142a21 |   glance-srv   |     image     |    Glance Image Service   |
| 70d1d1db4f724893a13fa4c2c5dab06c |      heat      | orchestration |   Orchestration Service   |
| e5b515f3621743089e36ea76457380af |    heat-srv    | orchestration |   Orchestration Service   |
| 8a097854dffb4caa9c70145b3ef7248e |    keystone    |    identity   | Keystone Identity Service |
| 1fc72af27a134586b51bdf46b05537a4 |    neutron     |    network    |  Neutron Network Service  |
| 9dae9c8ff3d94b31839a29f91c2f7a99 |  neutron-srv   |    network    |  Neutron Network Service  |
| 422211388232443c9a9e105c561ab8f2 |      nova      |    compute    |    Nova Compute Service   |
| b663a1e197614f3d9c8c0f149323a56f |    nova-srv    |    compute    |    Nova Compute Service   |
+----------------------------------+----------------+---------------+---------------------------+

[root@SHBAK0801ctrl ~(keystone_admin)]# keystone endpoint-list
+----------------------------------+-----------+---------------------------------------------+---------------------------------------------+---------------------------------------------+----------------------------------+
|                id                |   region  |                  publicurl                  |                 internalurl                 |                   adminurl                  |            service_id            |
+----------------------------------+-----------+---------------------------------------------+---------------------------------------------+---------------------------------------------+----------------------------------+
| 0ce8852c776440da9e26462de296822e | regionOne |   http://controller:8774/v2/%(tenant_id)s   |   http://controller:8774/v2/%(tenant_id)s   |   http://controller:8774/v2/%(tenant_id)s   | 422211388232443c9a9e105c561ab8f2 |
| 4c801e7beca948869c6f3afd0fbde03c | regionTwo |          http://controller_vip:8777         |          http://controller_vip:8777         |          http://controller_vip:8777         | c22695a4c2b54d89a5971b67e1877289 |
| 5ec77aa7c1944eda87bad253bbfd8c95 | regionOne |   http://controller:8776/v1/%(tenant_id)s   |   http://controller:8776/v1/%(tenant_id)s   |   http://controller:8776/v1/%(tenant_id)s   | 67443adbe4c947419e67e63a2991c01e |
| 5fd853ff71f44f088d86702362741717 | regionOne |   http://controller:8004/v1/%(tenant_id)s   |   http://controller:8004/v1/%(tenant_id)s   |   http://controller:8004/v1/%(tenant_id)s   | 70d1d1db4f724893a13fa4c2c5dab06c |
| 61d8b2db04a7431780839a57ead79dd7 | regionTwo |          http://controller_vip:9292         |          http://controller_vip:9292         |          http://controller_vip:9292         | 232caae6b9fc435390c86c47ff142a21 |
| 8d1167755863493d823cd3d65d36cff2 | regionOne |            http://controller:8777           |            http://controller:8777           |            http://controller:8777           | 16bf05c877414ab9af01016c53af226e |
| 8e2bc0f9aefe4513801849a90f1d2987 | regionOne |         http://controller:5000/v2.0         |         http://controller:5000/v2.0         |         http://controller:35357/v2.0        | 8a097854dffb4caa9c70145b3ef7248e |
| 8e9931d761a14a198c973f9a3681ed72 | regionOne |            http://controller:9292           |            http://controller:9292           |            http://controller:9292           | 1b42d0b28a0e41de847eb501da2662d7 |
| a153896bcdb147b0ad35d1ee918c2819 | regionTwo | http://controller_vip:8776/v1/%(tenant_id)s | http://controller_vip:8776/v1/%(tenant_id)s | http://controller_vip:8776/v1/%(tenant_id)s | f26aceb7102d4a10b6f41931ee6f3dc7 |
| a280db7f54fa417ebd7374a7e12004ab | regionTwo | http://controller_vip:8774/v2/%(tenant_id)s | http://controller_vip:8774/v2/%(tenant_id)s | http://controller_vip:8774/v2/%(tenant_id)s | b663a1e197614f3d9c8c0f149323a56f |
| af92bb8761eb45a180a2df3de47c4d44 | regionOne |            http://controller:9696           |            http://controller:9696           |            http://controller:9696           | 1fc72af27a134586b51bdf46b05537a4 |
| ba91f1049674445aaaddda342b84f781 | regionOne |   http://controller:8776/v2/%(tenant_id)s   |   http://controller:8776/v2/%(tenant_id)s   |   http://controller:8776/v2/%(tenant_id)s   | a50652c6d46c41309146f529f9c6a83d |
| c28624a108a14b6ebd76eee81f6dd3ca | regionTwo |          http://controller_vip:9696         |          http://controller_vip:9696         |          http://controller_vip:9696         | 9dae9c8ff3d94b31839a29f91c2f7a99 |
| f2824f8425404625afcc653f4961b09a | regionTwo |       http://controller_vip:5000/v2.0       |       http://controller_vip:5000/v2.0       |       http://controller_vip:35357/v2.0      | 8a097854dffb4caa9c70145b3ef7248e |
| fd839e82cb894e9fb88f55abf0e059a7 | regionTwo | http://controller_vip:8004/v1/%(tenant_id)s | http://controller_vip:8004/v1/%(tenant_id)s | http://controller_vip:8004/v1/%(tenant_id)s | e5b515f3621743089e36ea76457380af |
+----------------------------------+-----------+---------------------------------------------+---------------------------------------------+---------------------------------------------+----------------------------------+

Note: in a multi-region setup, keep only one keystone endpoint (keeping two bit us badly once, and the root cause is still not entirely clear), so delete the regionTwo keystone endpoint manually. Also, if the clocks of the two regions are out of sync, Keystone authentication will be affected as well.
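Following the note above, a sketch of the manual cleanup (the endpoint id is the regionTwo keystone endpoint from the listing above; the NTP commands assume ntpd/ntpdate are in use, adjust to your own time-sync setup):

[root@SHBAK0801ctrl ~(keystone_admin)]# keystone endpoint-delete f2824f8425404625afcc653f4961b09a   # drop the regionTwo keystone endpoint
# check that both regions agree on time
ntpdate -q controller    # query the offset against the first environment's controller
service ntpd status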

5. Modify the HAProxy configuration file in the second environment, then restart the service.

[root@ceph01 ~(keystone_admin)]# cat /etc/haproxy/haproxy.cfg  # do the same on both controllers
listen keystone_admin_cluster
        bind controller_vip:35357
        balance  source
        option  tcpka
        option  httpchk
        option  tcplog
        server SHBAK0801ctrl SHBAK0801ctrl:35357 check inter 2000 rise 2 fall 5  # SHBAK0801ctrl and SHBAK0802ctrl here are the two keystone servers of the first environment
        server SHBAK0802ctrl SHBAK0802ctrl:35357 check inter 2000 rise 2 fall 5  # ditto

listen keystone_public_internal_cluster
        bind controller_vip:5000
        balance  source
        option  tcpka
        option  httpchk
        option  tcplog
        server SHBAK0801ctrl SHBAK0801ctrl:5000 check inter 2000 rise 2 fall 5
        server SHBAK0802ctrl SHBAK0802ctrl:5000 check inter 2000 rise 2 fall 5

#listen heat_app_api                            # comment out this section
#      bind controller_vip:8001
#      balance  source
#      option  tcpka
#      option  tcplog
#      server ceph01 ceph01:8001 check inter 2000 rise 2 fall 5
#      server ceph02 ceph02:8001 check inter 2000 rise 2 fall 5

#listen dashboard_cluster                       # comment out this section
#      bind controller_vip:80
#      balance  source
#      option  tcpka
#      option  httpchk
#      option  tcplog
#      server ceph01 ceph01:80 check inter 2000 rise 2 fall 5
#      server ceph02 ceph02:80 check inter 2000 rise 2 fall 5

service haproxy restart  # restart the haproxy service
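Before and after the restart it is worth verifying the proxying; a small sketch (haproxy -c only validates the configuration file, and the curl calls should return the Keystone version document served by the first environment's keystone):

haproxy -c -f /etc/haproxy/haproxy.cfg    # syntax check before restarting
curl http://controller_vip:5000/v2.0      # answered by SHBAK0801ctrl/SHBAK0802ctrl via the VIP
curl http://controller_vip:35357/v2.0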

6. Modify the configuration files of the other components (glance, nova, neutron, cinder, ceilometer); in both environments, point each component at its own region.

[root@ceph01 ~(keystone_admin)]# vim /etc/glance/glance-api.conf  # only the regionTwo changes are shown below; regionOne needs the equivalent changes, not listed here
[DEFAULT]
os_region_name=regionTwo

[root@ceph01 ~(keystone_admin)]# vim /etc/glance/glance-registry.conf
[DEFAULT]
os_region_name=regionTwo

[root@ceph01 ~(keystone_admin)]# vim /etc/cinder/cinder.conf
[DEFAULT]
os_region_name=regionTwo

[root@ceph01 ~(keystone_admin)]# vim /etc/neutron/neutron.conf
[DEFAULT]
nova_region_name = regionTwo

[root@ceph01 ~(keystone_admin)]# vim /etc/nova/nova.conf
[neutron]
region_name = regionTwo
[cinder]
os_region_name = regionTwo

[root@ceph01 ~(keystone_admin)]# vim /etc/ceilometer/ceilometer.conf
[service_credentials]
os_region_name = regionTwo

# remember that regionOne needs the same changes, even though they are not shown here
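The same edits can be scripted instead of done by hand in vim; a sketch assuming openstack-config (from openstack-utils) is installed on the controllers (on the first environment, substitute regionOne):

openstack-config --set /etc/glance/glance-api.conf DEFAULT os_region_name regionTwo
openstack-config --set /etc/glance/glance-registry.conf DEFAULT os_region_name regionTwo
openstack-config --set /etc/cinder/cinder.conf DEFAULT os_region_name regionTwo
openstack-config --set /etc/neutron/neutron.conf DEFAULT nova_region_name regionTwo
openstack-config --set /etc/nova/nova.conf neutron region_name regionTwo
openstack-config --set /etc/nova/nova.conf cinder os_region_name regionTwo
openstack-config --set /etc/ceilometer/ceilometer.conf service_credentials os_region_name regionTwo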

7. Restart the remaining component services in the second OpenStack environment. Once Horizon detects that the Keystone endpoints span multiple regions, region selection becomes available in the UI.
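A sketch of the restart, assuming the openstack-service helper from openstack-utils is available (otherwise restart each openstack-* service individually); keystone stays stopped on the second environment:

# on the second environment's controllers
openstack-service restart glance
openstack-service restart nova
openstack-service restart neutron
openstack-service restart cinder
openstack-service restart ceilometer
openstack-service restart heat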

8. Modify the keystonerc file.

[root@ceph01 ~(keystone_admin)]# cat keystonerc
export OS_USERNAME=admin
export OS_PASSWORD=admin
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://controller_vip:35357/v2.0
export OS_REGION_NAME=regionTwo  # add this line so the command line works against regionTwo
export PS1='[\u@\h \W(keystone_admin)]\$ '
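With OS_REGION_NAME exported, the CLI on the second environment defaults to regionTwo; most clients can also override the region per command, for example (a usage sketch):

source keystonerc
nova list                              # hits the regionTwo endpoints registered in the shared keystone
nova --os-region-name regionOne list   # same credentials, aimed at regionOne instead
glance --os-region-name regionTwo image-list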

References