
OpenStack high availability implementation


 

Implementing high availability for the OpenStack controller

#Install the base packages
[root@openstack-controller2 ~]#yum install centos-release-openstack-train.noarch -y

[root@openstack-controller2 ~]#yum install python-openstackclient openstack-selinux -y

[root@openstack-controller2 ~]#yum install -y python2-PyMySQL python-memcached

#Install keystone (the epel repo needs to be disabled)
[root@openstack-controller2 ~]#yum install openstack-keystone httpd mod_wsgi -y

#On the controller node, copy the keystone configuration files /etc/keystone/* to node 102.
[root@openstack-controller1 ~]# ssh-copy-id 10.0.0.102
[root@openstack-controller1 ~]# rsync -ar /etc/keystone/* 10.0.0.102:/etc/keystone
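#Optional sanity check, not in the original steps but the same pattern used for the other services below: confirm none of the synced keystone files still reference a controller1-only address; if nothing binds to controller1's own address, no edits are needed.
[root@openstack-controller2 ~]# grep -R 10.0.0 /etc/keystone/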

[root@openstack-controller2 ~]#echo "10.0.0.188 openstack-vip.tan.local" >> /etc/hosts

[root@openstack-controller2 ~]#vim /etc/httpd/conf/httpd.conf
ServerName 10.0.0.102:80

[root@openstack-controller2 ~]#ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/

[root@openstack-controller2 ~]#systemctl restart httpd
[root@openstack-controller2 ~]#systemctl enable httpd
[root@openstack-controller2 ~]# ss -tnl|grep 5000
LISTEN 0 128 [::]:5000 [::]:*


#Verify keystone
#In haproxy, switch the backend for port 5000 to host 102 and restart haproxy
listen openstack-keystone-5000
bind 10.0.0.188:5000
mode tcp
#server 10.0.0.101 10.0.0.101:5000 check inter 3s fall 3 rise 5
server 10.0.0.102 10.0.0.102:5000 check inter 3s fall 3 rise 5
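#A quick hedged check, not in the original steps: after restarting haproxy, keystone's unauthenticated version document should be reachable through the VIP.
[root@openstack-ha1 ~]# systemctl restart haproxy
[root@openstack-controller1 ~]# curl -s http://10.0.0.188:5000/v3
#Expected: a JSON body describing the Identity v3 API, served by 10.0.0.102.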

#The following commands verify that keystone requests are forwarded to 102.
#Run them on controller1: the request goes to the VIP, which forwards it to 102. A successful response proves that 102 is serving correctly.
[root@openstack-controller1 ~]#source admin-openrc
[root@openstack-controller1 ~]# openstack user list
+----------------------------------+-----------+
| ID | Name |
+----------------------------------+-----------+
| 317a4ea405d745bbb9e5d76fc87a0751 | admin |
| b107b3fb4ab445d093db6730fc794b4b | myuser |
| 5e66cc26e56b443996d24f0d687b2cdc | glance |
| ff53352e7b714d899b5a90f34054cefd | placement |
| 3ab7cf5eea184cb480adcb0aa770abd5 | nova |
| 3cf452cb70e74bdbb39a9597aced8f75 | neutron |
| 607c1e56870741d99977d0f3dd4779b0 | cinder |
+----------------------------------+-----------+
[root@openstack-controller1 ~]# neutron agent-list
neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
+--------------------------------------+--------------------+---------------------------------+-------------------+-------+----------------+---------------------------+
| id | agent_type | host | availability_zone | alive | admin_state_up | binary |
+--------------------------------------+--------------------+---------------------------------+-------------------+-------+----------------+---------------------------+
| 4d78d03b-2691-4072-ae6a-f09cd8af3603 | Linux bridge agent | openstack-compute1.tan.local | | :-) | True | neutron-linuxbridge-agent |
| 59390685-4dae-487d-9e15-f0cdf1709ffe | DHCP agent | openstack-controller1.tan.local | nova | :-) | True | neutron-dhcp-agent |
| 5ab3fb3b-4f38-41aa-a750-b558d98e3a38 | Linux bridge agent | openstack-controller1.tan.local | | :-) | True | neutron-linuxbridge-agent |
| 8dfaf895-002c-470c-9736-89ac0696454e | Metadata agent | openstack-controller1.tan.local | | :-) | True | neutron-metadata-agent |
| f2e9c598-e6ea-494d-a550-07aa33d29bef | Linux bridge agent | openstack-compute2.tan.local | | xxx | True | neutron-linuxbridge-agent |
+--------------------------------------+--------------------+---------------------------------+-------------------+-------+----------------+---------------------------+



Install glance
[root@openstack-controller2 ~]#yum install -y openstack-glance

[root@openstack-controller2 ~]#mkdir /var/lib/glance/images
[root@openstack-controller2 ~]#chown glance.glance /var/lib/glance/images -R

[root@openstack-controller2 ~]# showmount -e 10.0.0.105
Export list for 10.0.0.105:
/data/glance *
[root@openstack-controller2 ~]# mount 10.0.0.105:/data/glance /var/lib/glance/images/
[root@openstack-controller2 ~]# ll /var/lib/glance/images/
total 1791372
-rw-r----- 1 glance glance 12716032 Sep 21 19:45 28787e4d-9b89-455f-88ad-aaeb82ee29f0
-rw-r----- 1 glance glance 1821638656 Sep 22 11:40 718c4e16-f8e6-4d43-9b38-3f1d44429067

[root@openstack-controller2 ~]# vim /etc/fstab
10.0.0.105:/data/glance /var/lib/glance/images nfs defaults,_netdev 0 0
[root@openstack-controller2 ~]# mount -a

#On the controller node, copy the glance configuration files /etc/glance/* to node 102.
[root@openstack-controller1 ~]# rsync -ra /etc/glance/* 10.0.0.102:/etc/glance

[root@openstack-controller2 ~]#systemctl restart openstack-glance-api.service
[root@openstack-controller2 ~]#systemctl enable openstack-glance-api.service
[root@openstack-controller2 ~]# ss -tnl |grep 9292
LISTEN 0 128 *:9292 *:*

Verify glance
#In haproxy, switch the backend for port 9292 to host 102 and restart haproxy
[root@openstack-ha1 ~]# vim /etc/haproxy/haproxy.cfg
listen openstack-glance-9292
bind 10.0.0.188:9292
mode tcp
#server 10.0.0.101 10.0.0.101:9292 check inter 3s fall 3 rise 5
server 10.0.0.102 10.0.0.102:9292 check inter 3s fall 3 rise 5

#The following command verifies that glance requests are forwarded to 102
[root@openstack-controller1 ~]#source admin-openrc.sh
[root@openstack-controller1 ~]# openstack image list
+--------------------------------------+---------------+--------+
| ID | Name | Status |
+--------------------------------------+---------------+--------+
| 718c4e16-f8e6-4d43-9b38-3f1d44429067 | centos7-image | active |
| 28787e4d-9b89-455f-88ad-aaeb82ee29f0 | cirros-0.4.0 | active |
+--------------------------------------+---------------+--------+



Install placement
[root@openstack-controller2 ~]#yum install -y openstack-placement-api

#On the controller node, copy the placement configuration files /etc/placement/* to node 102.
[root@openstack-controller1 ~]# rsync -ra /etc/placement/* 10.0.0.102:/etc/placement

#grep 10.0.0 /etc/placement/* -R: as long as nothing listens on a local address, the configuration files do not need to be changed.

#The packaged placement httpd config has a known issue; append the following lines at the end to grant access.
[root@openstack-controller2 ~]#vim /etc/httpd/conf.d/00-placement-api.conf
<Directory /usr/bin>
<IfVersion >= 2.4>
Require all granted
</IfVersion>
<IfVersion < 2.4>
Order allow,deny
Allow from all
</IfVersion>
</Directory>
[root@openstack-controller2 ~]#systemctl restart httpd

Verify placement
#In haproxy, switch the backend for port 8778 to host 102 and restart haproxy
[root@openstack-ha1 ~]# vim /etc/haproxy/haproxy.cfg
listen openstack-placement-8778
bind 10.0.0.188:8778
mode tcp
#server 10.0.0.101 10.0.0.101:8778 check inter 3s fall 3 rise 5
server 10.0.0.102 10.0.0.102:8778 check inter 3s fall 3 rise 5

#The following command verifies that placement requests are forwarded to 102
[root@openstack-controller1 ~]# placement-status upgrade check
+----------------------------------+
| Upgrade Check Results |
+----------------------------------+
| Check: Missing Root Provider IDs |
| Result: Success |
| Details: None |
+----------------------------------+
| Check: Incomplete Consumers |
| Result: Success |
| Details: None |
+----------------------------------+


Install nova-controller
[root@openstack-controller2 ~]#yum install openstack-nova-api openstack-nova-conductor openstack-nova-novncproxy openstack-nova-scheduler

#On the controller node, copy the nova configuration files /etc/nova/* to node 102.
[root@openstack-controller1 ~]# rsync -ra /etc/nova/* 10.0.0.102:/etc/nova

#After the configuration files are in place, run grep 10.0.0 /etc/nova/* -R; if nothing listens on a local address, no changes are needed.

#Here the VNC options do listen on the local address: in the [vnc] section of /etc/nova/nova.conf, set server_listen = 10.0.0.102 and server_proxyclient_address = 10.0.0.102
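#For reference, a minimal sketch of the resulting [vnc] section in /etc/nova/nova.conf on controller2. The novncproxy_base_url line is an assumption (pointing console URLs at the VIP) and is not part of the original notes:
[vnc]
enabled = true
server_listen = 10.0.0.102
server_proxyclient_address = 10.0.0.102
# assumed: hand out console URLs that go through the VIP
novncproxy_base_url = http://openstack-vip.tan.local:6080/vnc_auto.html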


[root@openstack-controller2 ~]# systemctl enable \
openstack-nova-api.service \
openstack-nova-scheduler.service \
openstack-nova-conductor.service \
openstack-nova-novncproxy.service
[root@openstack-controller2 ~]# systemctl start \
openstack-nova-api.service \
openstack-nova-scheduler.service \
openstack-nova-conductor.service \
openstack-nova-novncproxy.service

Verify nova-controller
#In haproxy, switch the backends for port 8774 (nova API) and port 6080 (novncproxy) to host 102 and restart haproxy
listen openstack-novacontroller-8774
bind 10.0.0.188:8774
mode tcp
#server 10.0.0.101 10.0.0.101:8774 check inter 3s fall 3 rise 5
server 10.0.0.102 10.0.0.102:8774 check inter 3s fall 3 rise 5

listen openstack-nova-novncproxy-6080
bind 10.0.0.188:6080
mode tcp
#server 10.0.0.101 10.0.0.101:6080 check inter 3s fall 3 rise 5
server 10.0.0.102 10.0.0.102:6080 check inter 3s fall 3 rise 5

#The following command verifies that nova requests are forwarded to 102
[root@openstack-controller1 ~]# nova service-list
+--------------------------------------+----------------+---------------------------------+----------+---------+-------+----------------------------+-----------------+-------------+
| Id | Binary | Host | Zone | Status | State | Updated_at | Disabled Reason | Forced down |
+--------------------------------------+----------------+---------------------------------+----------+---------+-------+----------------------------+-----------------+-------------+
| d2c9f15a-d8e6-472d-8950-f5d7b56be512 | nova-conductor | openstack-controller1.tan.local | internal | enabled | up | 2022-09-22T08:34:50.000000 | - | False |
| 54829f2e-4aa7-44cf-85fc-9dbee8ffa60b | nova-scheduler | openstack-controller1.tan.local | internal | enabled | up | 2022-09-22T08:34:48.000000 | - | False |
| 523bf526-6aac-4fac-a904-aa9c36df7e74 | nova-compute | openstack-compute1.tan.local | nova | enabled | up | 2022-09-22T08:34:47.000000 | - | False |
| 0913af81-7ad8-4136-854c-c5315834176e | nova-compute | openstack-compute2.tan.local | nova | enabled | up | 2022-09-22T08:34:46.000000 | - | False |
| dffaff4d-7bd7-4a2d-9028-8bb9b3ebb5f5 | nova-conductor | openstack-controller2.tan.local | internal | enabled | up | 2022-09-22T08:34:51.000000 | - | False |
| 7ac1e447-6b81-4a79-bd41-196d054da348 | nova-scheduler | openstack-controller2.tan.local | internal | enabled | up | 2022-09-22T08:34:53.000000 | - | False |
+--------------------------------------+----------------+---------------------------------+----------+---------+-------+----------------------------+-----------------+-------------+




Install neutron-controller
[root@openstack-controller2 ~]# yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables

#On the controller node, copy the neutron configuration files /etc/neutron/* to node 102.
[root@openstack-controller1 ~]# rsync -ra /etc/neutron/* 10.0.0.102:/etc/neutron

#After the configuration files are in place, run grep 10.0.0 /etc/neutron/* -R; if nothing listens on a local address, no changes are needed.


[root@openstack-controller2 ~]# systemctl restart openstack-nova-api.service

[root@openstack-controller2 ~]# systemctl enable neutron-server.service \
neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
neutron-metadata-agent.service
[root@openstack-controller2 ~]# systemctl start neutron-server.service \
neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
neutron-metadata-agent.service

Verify neutron-controller
#In haproxy, switch the backend for port 9696 (neutron server) to host 102 and restart haproxy
listen openstack-neutron-controller-9696
bind 10.0.0.188:9696
mode tcp
#server 10.0.0.101 10.0.0.101:9696 check inter 3s fall 3 rise 5
server 10.0.0.102 10.0.0.102:9696 check inter 3s fall 3 rise 5

#The following command verifies that neutron requests are forwarded to 102
[root@openstack-controller1 ~]# openstack extension list --network
+----------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------+
| Name | Alias | Description |
+----------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------+
| Subnet Pool Prefix Operations | subnetpool-prefix-ops | Provides support for adjusting the prefix list of subnet pools |
| Default Subnetpools | default-subnetpools | Provides ability to mark and use a subnetpool as the default. |
| Network IP Availability | network-ip-availability | Provides IP availability data for each network and subnet. |
| Network Availability Zone | network_availability_zone | Availability zone support for network. |
| Subnet Onboard | subnet_onboard | Provides support for onboarding subnets into subnet pools |
| Network MTU (writable) | net-mtu-writable | Provides a writable MTU attribute for a network resource. |
| Port Binding | binding | Expose port bindings of a virtual port to external application |
| agent | agent | The agent management extension. |
| Subnet Allocation | subnet_allocation | Enables allocation of subnets from a subnet pool |
| DHCP Agent Scheduler | dhcp_agent_scheduler | Schedule networks among dhcp agents |
| Neutron external network | external-net | Adds external network attribute to network resource. |
| Empty String Filtering Extension | empty-string-filtering | Allow filtering by attributes with empty string value |
| Neutron Service Flavors | flavors | Flavor specification for Neutron advanced services. |
| Network MTU | net-mtu | Provides MTU attribute for a network resource. |
| Availability Zone | availability_zone | The availability zone extension. |
| Quota management support | quotas | Expose functions for quotas management per tenant |
| Tag support for resources with standard attribute: subnet, trunk, network_segment_range, router, network, policy, subnetpool, port, security_group, floatingip | standard-attr-tag | Enables to set tag on resources with standard attribute. |
| Availability Zone Filter Extension | availability_zone_filter | Add filter parameters to AvailabilityZone resource |
| If-Match constraints based on revision_number | revision-if-match | Extension indicating that If-Match based on revision_number is supported. |
| Filter parameters validation | filter-validation | Provides validation on filter parameters. |
| Multi Provider Network | multi-provider | Expose mapping of virtual networks to multiple physical networks |
| Quota details management support | quota_details | Expose functions for quotas usage statistics per project |
| Address scope | address-scope | Address scopes extension. |
| Agent's Resource View Synced to Placement | agent-resources-synced | Stores success/failure of last sync to Placement |
| Subnet service types | subnet-service-types | Provides ability to set the subnet service_types field |
| Neutron Port MAC address regenerate | port-mac-address-regenerate | Network port MAC address regenerate |
| Add security_group type to network RBAC | rbac-security-groups | Add security_group type to network RBAC |
| Provider Network | provider | Expose mapping of virtual networks to physical networks |
| Neutron Service Type Management | service-type | API for retrieving service providers for Neutron advanced services |
| Neutron Extra DHCP options | extra_dhcp_opt | Extra options configuration for DHCP. For example PXE boot options to DHCP clients can be specified (e.g. tftp-server, server-ip-address, bootfile-name) |
| Port filtering on security groups | port-security-groups-filtering | Provides security groups filtering when listing ports |
| Resource timestamps | standard-attr-timestamp | Adds created_at and updated_at fields to all Neutron resources that have Neutron standard attributes. |
| Resource revision numbers | standard-attr-revisions | This extension will display the revision number of neutron resources. |
| Pagination support | pagination | Extension that indicates that pagination is enabled. |
| Sorting support | sorting | Extension that indicates that sorting is enabled. |
| security-group | security-group | The security groups extension. |
| RBAC Policies | rbac-policies | Allows creation and modification of policies that control tenant access to resources. |
| standard-attr-description | standard-attr-description | Extension to add descriptions to standard attributes |
| IP address substring filtering | ip-substring-filtering | Provides IP address substring filtering when listing ports |
| Port Security | port-security | Provides port security |
| Allowed Address Pairs | allowed-address-pairs | Provides allowed address pairs |
| project_id field enabled | project-id | Extension that indicates that project_id field is enabled. |
| Port Bindings Extended | binding-extended | Expose port bindings of a virtual port to external application |
+----------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------+


Install the dashboard
[root@openstack-controller2 ~]# yum install -y openstack-dashboard
#On the controller node, copy the dashboard configuration files /etc/openstack-dashboard/* to node 102.
[root@openstack-controller1 ~]# rsync -ra /etc/openstack-dashboard/* 10.0.0.102:/etc/openstack-dashboard

#After the configuration files are in place, run grep 10.0.0 /etc/openstack-dashboard/* -R; if nothing listens on a local address, no changes are needed.
#Here the old local address is still present: change OPENSTACK_HOST and ALLOWED_HOSTS to the current host's address.
[root@openstack-controller2 ~]# vim /etc/openstack-dashboard/local_settings
ALLOWED_HOSTS = ['10.0.0.102', 'openstack-vip.tan.local']
OPENSTACK_HOST = "10.0.0.102"


[root@openstack-controller2 ~]# systemctl restart httpd.service

Verify the dashboard
#In haproxy, switch the backend for port 80 (dashboard) to host 102 and restart haproxy
[root@openstack-ha1 ~]# vim /etc/haproxy/haproxy.cfg
listen openstack-dashboard-80
bind 10.0.0.188:80
mode tcp
#server 10.0.0.101 10.0.0.101:80 check inter 3s fall 3 rise 5
server 10.0.0.102 10.0.0.102:80 check inter 3s fall 3 rise 5

#Verify that dashboard requests are forwarded to 102
Browse to http://openstack-vip.tan.local/dashboard
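#A hedged command-line alternative to the browser test, not in the original steps; expect a 200, or a 302 redirect to the login page:
[root@openstack-controller1 ~]# curl -I http://openstack-vip.tan.local/dashboard/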

Install the cinder service
#Package installation
[root@openstack-controller2 ~]#yum install openstack-cinder -y

#On the controller node, copy the cinder configuration files /etc/cinder/* to node 102.
[root@openstack-controller1 ~]# rsync -ra /etc/cinder/* 10.0.0.102:/etc/cinder

#After the configuration files are in place, check for local addresses; if nothing listens on a local address, no changes are needed.
[root@openstack-controller2 ~]#grep 10.0.0 /etc/cinder/* -R
[root@openstack-controller2 ~]# systemctl restart openstack-nova-api.service
[root@openstack-controller2 ~]# systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service --now
[root@openstack-controller2 ~]# ss -tnl |grep 8776
LISTEN 0 128 *:8776 *:*

#Verify that the NFS backend is mounted automatically
[root@openstack-controller2 ~]# systemctl restart openstack-cinder-volume.service
[root@openstack-controller2 ~]# df -Th
Filesystem Type Size Used Avail Use% Mounted on
devtmpfs devtmpfs 2.0G 0 2.0G 0% /dev
tmpfs tmpfs 2.0G 0 2.0G 0% /dev/shm
tmpfs tmpfs 2.0G 12M 2.0G 1% /run
tmpfs tmpfs 2.0G 0 2.0G 0% /sys/fs/cgroup
/dev/mapper/centos-root xfs 15G 2.7G 13G 18% /
/dev/sda1 xfs 1014M 138M 877M 14% /boot
/dev/mapper/centos-home xfs 2.0G 33M 2.0G 2% /home
tmpfs tmpfs 394M 0 394M 0% /run/user/0
10.0.0.105:/data/glance nfs4 15G 4.2G 11G 28% /var/lib/glance/images
10.0.0.103:/nfsdata nfs4 15G 2.2G 13G 15% /var/lib/cinder/mnt/d0249c90bf1851a8e7199c54ea9417a9


#In haproxy, change the backend to host 102 and restart haproxy
[root@openstack-ha1 ~]# vim /etc/haproxy/haproxy.cfg
listen openstack-nova-cinder-8776
bind 10.0.0.188:8776
mode tcp
#server 10.0.0.101 10.0.0.101:8776 check inter 3s fall 3 rise 5
server 10.0.0.102 10.0.0.102:8776 check inter 3s fall 3 rise 5

#Verify from controller1 that the service is still reachable; a successful response means the request was forwarded to 102 and executed there.
[root@openstack-controller1 ~]# openstack volume service list
+------------------+-------------------------------------+------+---------+-------+----------------------------+
| Binary | Host | Zone | Status | State | Updated At |
+------------------+-------------------------------------+------+---------+-------+----------------------------+
| cinder-scheduler | openstack-controller1.tan.local | nova | enabled | up | 2022-09-22T08:54:36.000000 |
| cinder-volume | openstack-ha1.tan.local@lvm | nova | enabled | up | 2022-09-22T08:54:34.000000 |
| cinder-volume | openstack-controller1.tan.local@nfs | nova | enabled | down | 2022-09-22T06:08:04.000000 |
| cinder-scheduler | openstack-controller2.tan.local | nova | enabled | up | 2022-09-22T08:54:39.000000 |
+------------------+-------------------------------------+------+---------+-------+----------------------------+
#Allowing haproxy to forward to both backend nodes gives a highly available control plane.
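#A minimal sketch of one such active-active listen block with both controllers enabled (shown for port 5000; the other blocks follow the same pattern, simply uncomment the 10.0.0.101 server line in each):
listen openstack-keystone-5000
bind 10.0.0.188:5000
mode tcp
server 10.0.0.101 10.0.0.101:5000 check inter 3s fall 3 rise 5
server 10.0.0.102 10.0.0.102:5000 check inter 3s fall 3 rise 5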

#Finally, cinder was never deployed during the earlier quick setup of the compute nodes; it is added the same way: copy the configuration files and restart the services.

Implementing high availability for OpenStack instances

#Two virtual machines are made highly available with keepalived + haproxy and a VIP
#Two VMs: 10.0.0.88 (hostname centos77) and 10.0.0.89 (hostname centos7)
Install haproxy + keepalived:
[root@centos7 ~]# yum install haproxy keepalived -y    #IP address 10.0.0.89
[root@centos77 ~]# yum install haproxy keepalived -y   #IP address 10.0.0.88

#Attach the VIP to the designated instances:
#Create the VIP port and associate it with the security group:
[root@openstack-controller1 ~]# neutron port-create --fixed-ip ip_address=10.0.0.160 --security-group e2a17048-c290-443d-bdb9-7ad34cb43e5d external-net
neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
Created a new port:
+-----------------------+-----------------------------------------------------------------------------------+
| Field | Value |
+-----------------------+-----------------------------------------------------------------------------------+
| admin_state_up | True |
| allowed_address_pairs | |
| binding:host_id | |
| binding:profile | {} |
| binding:vif_details | {} |
| binding:vif_type | unbound |
| binding:vnic_type | normal |
| created_at | 2022-09-22T07:04:32Z |
| description | |
| device_id | |
| device_owner | |
| extra_dhcp_opts | |
| fixed_ips | {"subnet_id": "92ecedc2-3488-435f-af81-b0964d25ac3c", "ip_address": "10.0.0.160"} |
| id | 010370fe-4876-43b4-8a9d-56b6cc6b7bdd |
| mac_address | fa:16:3e:fc:93:ca |
| name | |
| network_id | 86a33d1d-a373-4c7d-90fe-31c4d3018546 |
| port_security_enabled | True |
| project_id | f7c80417780a4bd59beaa38a9d36271e |
| revision_number | 1 |
| security_groups | e2a17048-c290-443d-bdb9-7ad34cb43e5d |
| status | DOWN |
| tags | |
| tenant_id | f7c80417780a4bd59beaa38a9d36271e |
| updated_at | 2022-09-22T07:04:33Z |
+-----------------------+-----------------------------------------------------------------------------------+

#List the port ID of each instance:
[root@openstack-controller1 ~]# openstack port list | grep 10.0.0.89
| dbb0c17d-f55e-46c5-8bc3-3b7a78eb1050 | | fa:16:3e:51:e4:47 | ip_address='10.0.0.89', subnet_id='92ecedc2-3488-435f-af81-b0964d25ac3c' | ACTIVE |
[root@openstack-controller1 ~]# openstack port list | grep 10.0.0.88
| a5c4b35c-271a-472e-a419-f9010dc5e0f4 | | fa:16:3e:f6:e9:72 | ip_address='10.0.0.88', subnet_id='92ecedc2-3488-435f-af81-b0964d25ac3c' | ACTIVE |

#Associate the VIP with the instance ports (allowed address pairs):
[root@openstack-controller1 ~]# neutron port-update dbb0c17d-f55e-46c5-8bc3-3b7a78eb1050 --allowed_address_pairs list=true type=dict ip_address=10.0.0.160
neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
Updated port: dbb0c17d-f55e-46c5-8bc3-3b7a78eb1050
[root@openstack-controller1 ~]# neutron port-update a5c4b35c-271a-472e-a419-f9010dc5e0f4 --allowed_address_pairs list=true type=dict ip_address=10.0.0.160
neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
Updated port: a5c4b35c-271a-472e-a419-f9010dc5e0f4
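#As the deprecation warnings above suggest, the same operations can also be done with the openstack client. A hedged equivalent sketch; the port name vip-port is made up for illustration:
[root@openstack-controller1 ~]# openstack port create --network external-net --fixed-ip ip-address=10.0.0.160 --security-group e2a17048-c290-443d-bdb9-7ad34cb43e5d vip-port
[root@openstack-controller1 ~]# openstack port set --allowed-address ip-address=10.0.0.160 dbb0c17d-f55e-46c5-8bc3-3b7a78eb1050
[root@openstack-controller1 ~]# openstack port set --allowed-address ip-address=10.0.0.160 a5c4b35c-271a-472e-a419-f9010dc5e0f4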


#Configure keepalived and the security group policy:
#Master configuration:
[root@centos7 ~]# vim /etc/keepalived/keepalived.conf
vrrp_instance VI_1 {
state MASTER
interface eth0
virtual_router_id 89
priority 100
advert_int 1
unicast_src_ip 10.0.0.89
unicast_peer {
10.0.0.88
}
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
10.0.0.160/24 dev eth0 label eth0:0
}
}

#Backup configuration:
[root@centos77 ~]# vi /etc/keepalived/keepalived.conf
vrrp_instance VI_1 {
state BACKUP
interface eth0
virtual_router_id 89
priority 80
advert_int 1
unicast_src_ip 10.0.0.88
unicast_peer {
10.0.0.89
}
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
10.0.0.160/24 dev eth0 label eth0:0
}
}

#Configure the security group policy: keepalived relies on the VRRP protocol (IP protocol 112) to propagate its advertisements, so it must be allowed separately in the OpenStack security group. In my case I had already configured ingress/egress rules allowing all icmp, tcp, and udp traffic, so I did not add anything here.
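#If the security group does not already permit VRRP, a hedged example of the rule (using the security group ID from the port-create step above; --protocol accepts vrrp or the protocol number 112):
[root@openstack-controller1 ~]# openstack security group rule create --ingress --protocol 112 e2a17048-c290-443d-bdb9-7ad34cb43e5d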

#Start keepalived and verify:
#Start keepalived on each instance:
[root@centos7 ~]# systemctl start keepalived
[root@centos7 ~]# systemctl enable keepalived
[root@centos77 ~]# systemctl start keepalived
[root@centos77 ~]# systemctl enable keepalived
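#Optional check, not in the original steps: VRRP advertisements between the two peers should be visible with tcpdump on either instance, for example:
[root@centos77 ~]# tcpdump -nn -i eth0 ip proto 112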
#Verify the VIP:
#Master:
[root@centos7 ~]# ip a |grep 10.0.0
inet 10.0.0.89/24 brd 10.0.0.255 scope global noprefixroute dynamic eth0
inet 10.0.0.160/24 scope global secondary eth0:0
#backup:
[root@centos77 ~]# ip a |grep 10.0.0
inet 10.0.0.88/24 brd 10.0.0.255 scope global noprefixroute dynamic eth0

#Verify VIP connectivity:
[root@openstack-controller1 ~]# ping 10.0.0.160
PING 10.0.0.160 (10.0.0.160) 56(84) bytes of data.
64 bytes from 10.0.0.160: icmp_seq=1 ttl=64 time=2.50 ms
64 bytes from 10.0.0.160: icmp_seq=2 ttl=64 time=2.01 ms

#Verify VIP failover:
#After keepalived is stopped on the master, the VIP moves to the backup node.
[root@centos7 ~]# systemctl stop keepalived
[root@centos77 ~]# ip a |grep 10.0.0
inet 10.0.0.88/24 brd 10.0.0.255 scope global noprefixroute dynamic eth0
inet 10.0.0.160/24 scope global secondary eth0:0

#haproxy configuration:
Install haproxy on both virtual machines and combine it with keepalived for highly available load balancing.
#Install haproxy and the web service:
[root@centos7 ~]# yum install haproxy httpd -y
[root@centos7 ~]# echo "10.0.0.89" > /var/www/html/index.html
[root@centos7 ~]# sed -i 's/Listen 80/Listen 8090/g' /etc/httpd/conf/httpd.conf
[root@centos7 ~]# systemctl restart httpd
[root@centos7 ~]# ss -tnl |grep 8090
LISTEN 0 128 [::]:8090 [::]:*

[root@centos77 ~]# yum install haproxy httpd -y
[root@centos77 ~]# echo "10.0.0.88" > /var/www/html/index.html
[root@centos77 ~]# sed -i 's/Listen 80/Listen 8090/g' /etc/httpd/conf/httpd.conf
[root@centos77 ~]# systemctl restart httpd
[root@centos77 ~]# ss -tnl |grep 8090
LISTEN 0 128 [::]:8090 [::]:*

#Configure haproxy (the configuration is identical on both nodes):
[root@centos7 ~]# cat /etc/haproxy/haproxy.cfg
listen web-80
bind 10.0.0.160:80
mode http
server 10.0.0.89 10.0.0.89:8090 check inter 2000 fall 3 rise 5
server 10.0.0.88 10.0.0.88:8090 check inter 2000 fall 3 rise 5


#Configure kernel parameters: allow each VM to bind to a non-local IP and enable IP forwarding
[root@centos7 ~]# cat /etc/sysctl.conf
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.ip_forward = 1
[root@centos7 ~]# sysctl -p
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.ip_forward = 1

#Restart haproxy and verify access:
[root@centos7 ~]# systemctl restart haproxy
[root@centos77 ~]# systemctl restart haproxy

[root@openstack-controller1 ~]# curl 10.0.0.160
10.0.0.89

[root@openstack-controller1 ~]# curl 10.0.0.160
10.0.0.88

