Ha, so today I got handed an OpenStack deployment. No idea why the boss didn't just go with the Kolla approach to deploy OpenStack, but whatever, enough rambling, let's get started.
1. Install Nova:
All of the following operations are performed on the controller node.
# mysql -uroot -p000000
CREATE DATABASE nova_api;
CREATE DATABASE nova;
CREATE DATABASE nova_cell0;
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY '000000';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY '000000';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY '000000';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY '000000';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY '000000';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY '000000';
flush privileges;
# openstack user create --domain default --password-prompt nova
The password stays 000000, the same as before.
Add the admin role to the nova user
# openstack role add --project service --user nova admin
Create the nova service entity
# openstack service create --name nova --description "OpenStack Compute" compute
Create the nova service endpoints
# openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1
# openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1
# openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1
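As an optional sanity check, the three endpoints can simply be read back right after creating them:
# openstack endpoint list --service compute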
# yum install -y openstack-nova-api openstack-nova-conductor openstack-nova-novncproxy openstack-nova-scheduler
This covers the Nova control plane only; if the controller node also needs to run VMs, the nova-compute related RPM packages must be installed on it as well.
Edit the Nova configuration file /etc/nova/nova.conf in the corresponding sections
# vim /etc/nova/nova.conf
[DEFAULT]
enabled_apis=osapi_compute,metadata
transport_url=rabbit://rabbitmq:000000@controller
my_ip=10.20.21.XXX
[api_database]
connection=mysql+pymysql://nova:000000@controller/nova_api
[database]
connection=mysql+pymysql://nova:000000@controller/nova
[api]
auth_strategy=keystone
[keystone_authtoken]
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = 000000
[vnc]
enabled=true
server_listen=$my_ip
server_proxyclient_address=$my_ip
[glance]
api_servers=http://controller:9292
[oslo_concurrency]
lock_path=/var/lib/nova/tmp
[placement]
os_region_name=RegionOne
project_domain_name=Default
project_name=service
auth_type=password
user_domain_name=Default
auth_url=http://controller:5000/v3
username=placement
password=000000
Sync the nova_api database
# su -s /bin/sh -c "nova-manage api_db sync" nova
Register the cell0 database
# su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
Create the cell1 cell
# su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
Sync the nova database
# su -s /bin/sh -c "nova-manage db sync" nova
# nova-manage cell_v2 list_cells
# systemctl enable openstack-nova-api.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
# systemctl start openstack-nova-api.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
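To make sure the control-plane services actually came up, a couple of read-only checks can be run (these only query the services just started):
# nova-status upgrade check
# openstack compute service list
nova-conductor and nova-scheduler should show as enabled/up here; nova-compute will only appear after the compute node is added below.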
2. Deploy the compute node:
All of the following operations are performed on the physical compute node.
# yum install openstack-nova-compute -y
Edit the Nova configuration file /etc/nova/nova.conf in the corresponding sections
# vim /etc/nova/nova.conf
[DEFAULT]
enabled_apis=osapi_compute,metadata
transport_url=rabbit://rabbitmq:000000@controller
my_ip=10.12.21.XXX  (compute node IP)
[api]
auth_strategy=keystone
[keystone_authtoken]
auth_url=http://controller:5000/v3
memcached_servers=controller:11211
auth_type=password
project_domain_name=default
user_domain_name=default
project_name=service
username=nova
password=000000
[vnc]
enabled=True
server_listen=0.0.0.0
server_proxyclient_address=$my_ip
novncproxy_base_url=http://controller:6080/vnc_auto.html
[glance]
api_servers=http://controller:9292
[oslo_concurrency]
lock_path=/var/lib/nova/tmp
[placement]
os_region_name=RegionOne
project_domain_name=Default
project_name=service
auth_type=password
user_domain_name=Default
auth_url=http://controller:5000/v3
username=placement
password=000000
If the processor architecture is arm64, the following operations are required on the compute node.
Change the owner and group of the /usr/share/AAVMF directory, then edit /etc/libvirt/qemu.conf and add the nvram configuration.
# chown -R nova:nova /usr/share/AAVMF
# vim /etc/libvirt/qemu.conf
nvram = [
"/usr/share/AAVMF/AAVMF_CODE.fd:/usr/share/AAVMF/AAVMF_VARS.fd",
"/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw:/usr/share/edk2/aarch64/vars-template-pflash.raw"
]
2.3 Confirm VM hardware acceleration support
Check whether the compute node supports hardware acceleration (x86_64)
On an x86_64 processor, run the following command to check whether hardware acceleration is supported:
# egrep -c '(vmx|svm)' /proc/cpuinfo
If the result is 0, hardware acceleration is not supported and libvirt must be configured to use QEMU instead of the default KVM. Edit the [libvirt] section of /etc/nova/nova.conf:
[libvirt]
virt_type = qemu
If the result is 1 or greater, hardware acceleration is supported and no extra configuration is needed.
Check whether the compute node supports hardware acceleration (arm64)
On an arm64 processor, run the following command to check whether hardware acceleration is supported:
# virt-host-validate
If it shows FAIL, hardware acceleration is not supported and libvirt must be configured to use QEMU instead of the default KVM. The output looks like:
QEMU: Checking if device /dev/kvm exists: FAIL (Check that CPU and firmware supports virtualization and kvm module is loaded)
Edit the [libvirt] section of /etc/nova/nova.conf:
[libvirt]
virt_type = qemu
If it shows PASS, hardware acceleration is supported and no extra configuration is needed. The output looks like:
QEMU: Checking if device /dev/kvm exists: PASS
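If the virt_type change is needed, it can also be made without opening an editor. A minimal sketch, assuming the crudini helper is available in your repos (it is not pulled in by the nova packages themselves):
# yum install -y crudini
# crudini --set /etc/nova/nova.conf libvirt virt_type qemu
# crudini --get /etc/nova/nova.conf libvirt virt_type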
# systemctl enable libvirtd.service openstack-nova-compute.service
# systemctl start libvirtd.service openstack-nova-compute.service
Note, note, note: go back to the controller node and run the following there.
# openstack compute service list --service nova-compute
# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
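I also like to confirm that the new host was actually mapped into cell1, and optionally let Nova discover future compute nodes on its own; both steps below are optional and the interval value is just an example:
# su -s /bin/sh -c "nova-manage cell_v2 list_hosts" nova
Optionally, in /etc/nova/nova.conf on the controller:
[scheduler]
discover_hosts_in_cells_interval = 300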
3. Deploy the networking component:
All of the following operations are performed on the physical controller node.
# mysql -uroot -p000000
CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY '000000';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY '000000';
flush privileges;
# openstack user create --domain default --password-prompt neutron
The password is still 000000, as before.
Add the admin role to the neutron user
# openstack role add --project service --user neutron admin
Create the neutron service
# openstack service create --name neutron --description "OpenStack Networking" network
Create the neutron service endpoints
# openstack endpoint create --region RegionOne network public http://controller:9696
# openstack endpoint create --region RegionOne network internal http://controller:9696
# openstack endpoint create --region RegionOne network admin http://controller:9696
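Again optional, but the three network endpoints can be read back the same way as for compute:
# openstack endpoint list --service network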
LinuxBridge mode and OpenVSwitch mode: only one of the two needs to be configured. The LinuxBridge steps (controller node first, then compute node) come first below, followed by the OpenVSwitch steps.
# yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables -y
Configure Neutron
Edit the Neutron configuration file /etc/neutron/neutron.conf in the corresponding sections
# vim /etc/neutron/neutron.conf
[DEFAULT]
core_plugin=ml2
service_plugins=router
transport_url=rabbit://rabbitmq:000000@controller
auth_strategy=keystone
notify_nova_on_port_status_changes=true
notify_nova_on_port_data_changes=true
[database]
connection=mysql+pymysql://neutron:000000@controller/neutron
[keystone_authtoken]
auth_uri=http://controller:5000
auth_url=http://controller:35357
memcached_servers=controller:11211
auth_type=password
project_domain_name=default
user_domain_name=default
project_name=service
username=neutron
password=000000
[oslo_concurrency]
lock_path=/var/lib/neutron/tmp
[nova]  # add a new [nova] section to the file
auth_url=http://controller:35357
auth_type=password
project_domain_name=default
user_domain_name=default
region_name=RegionOne
project_name=service
username=nova
password=000000
Edit the ml2_conf configuration file /etc/neutron/plugins/ml2/ml2_conf.ini and add the following sections
# vim /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers=flat,vlan,vxlan
tenant_network_types=vxlan
mechanism_drivers=linuxbridge,l2population
extension_drivers=port_security
[ml2_type_flat]
flat_networks=provider
[ml2_type_vxlan]
vni_ranges=1:1000
[securitygroup]
enable_ipset=true
Edit the linuxbridge configuration file /etc/neutron/plugins/ml2/linuxbridge_agent.ini
# vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings=provider:eth1
[vxlan]
enable_vxlan=true
local_ip=10.12.20.XXX
l2_population=true
[securitygroup]
enable_security_group=true
firewall_driver=neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
local_ip should be set to the IP of a physical NIC; the layer-2 VXLAN tunnel traffic will be carried by the NIC that holds this IP. The example in this document uses the IP of the management NIC eth0; in production a separate dedicated NIC can be used.
For physical_interface_mappings, put the name of the business NIC connected to the provider network after `provider:`. In this document's example that is eth1.
Edit the l3_agent configuration file /etc/neutron/l3_agent.ini in the corresponding section
# vim /etc/neutron/l3_agent.ini
[DEFAULT]
interface_driver=linuxbridge
Edit the dhcp_agent configuration file /etc/neutron/dhcp_agent.ini in the corresponding section
# vim /etc/neutron/dhcp_agent.ini
[DEFAULT]
interface_driver=linuxbridge
dhcp_driver=neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata=true
Edit the metadata_agent configuration file /etc/neutron/metadata_agent.ini in the corresponding section
Here 000000 is the metadata proxy shared secret (kept the same as the other passwords in this deployment)
# vim /etc/neutron/metadata_agent.ini
[DEFAULT]
nova_metadata_host=controller
metadata_proxy_shared_secret=000000
Edit the Nova configuration file /etc/nova/nova.conf in the corresponding section
# vim /etc/nova/nova.conf
[neutron]
url = http://controller:9696
auth_url=http://controller:35357
auth_type=password
project_domain_name=default
user_domain_name=default
region_name=RegionOne
project_name=service
username=neutron
password=000000
service_metadata_proxy=true
metadata_proxy_shared_secret=000000
Configure the bridge netfilter
# echo net.bridge.bridge-nf-call-iptables = 1 >> /etc/sysctl.conf
# echo net.bridge.bridge-nf-call-ip6tables = 1 >> /etc/sysctl.conf
# cat /etc/sysctl.conf
# modprobe br_netfilter
# sysctl -p
# sed -i '$amodprobe br_netfilter' /etc/rc.local
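A quick check that the module is loaded and that the sysctl values took effect (both values should print 1):
# lsmod | grep br_netfilter
# sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables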
Disable IPv6
# echo net.ipv6.conf.all.disable_ipv6 = 1 >> /etc/sysctl.conf
# echo net.ipv6.conf.default.disable_ipv6 = 1 >> /etc/sysctl.conf
# sysctl -p
Create a symlink
# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
Sync the database
# su -s /bin/sh -c "neutron-db-manage --config-file \
/etc/neutron/neutron.conf --config-file \
/etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
Restart the Nova service
# systemctl restart openstack-nova-api.service
Start the Neutron services
# systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service
# systemctl start neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service
The following is done on the compute node. Install packages
# yum install openstack-neutron-linuxbridge ebtables ipset -y
Configure Neutron
Edit the Neutron configuration file /etc/neutron/neutron.conf in the corresponding sections
# vim /etc/neutron/neutron.conf
[DEFAULT]
transport_url=rabbit://rabbitmq:000000@controller
auth_strategy=keystone
[keystone_authtoken]
auth_uri=http://controller:5000
auth_url=http://controller:35357
memcached_servers=controller:11211
auth_type=password
project_domain_name=default
user_domain_name=default
project_name=service
username=neutron
password=000000
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
Edit the linuxbridge configuration file /etc/neutron/plugins/ml2/linuxbridge_agent.ini
# vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings=provider:eth1
[vxlan]
enable_vxlan=true
local_ip=10.12.20.XXX
l2_population=true
[securitygroup]
enable_security_group=true
firewall_driver=neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
Edit the Nova configuration file /etc/nova/nova.conf in the corresponding section
# vim /etc/nova/nova.conf
[neutron]
url=http://controller:9696
auth_url=http://controller:35357
auth_type=password
project_domain_name=default
user_domain_name=default
region_name=RegionOne
project_name=service
username=neutron
password=000000
Configure the bridge netfilter
# echo net.bridge.bridge-nf-call-iptables = 1 >> /etc/sysctl.conf
# echo net.bridge.bridge-nf-call-ip6tables = 1 >> /etc/sysctl.conf
# cat /etc/sysctl.conf
# modprobe br_netfilter
# sysctl -p
# sed -i '$amodprobe br_netfilter' /etc/rc.local
Disable IPv6
# echo net.ipv6.conf.all.disable_ipv6 = 1 >> /etc/sysctl.conf
# echo net.ipv6.conf.default.disable_ipv6 = 1 >> /etc/sysctl.conf
# sysctl -p
Restart the Nova service
# systemctl restart openstack-nova-compute.service
Start the Neutron service
# systemctl start neutron-linuxbridge-agent.service
# systemctl enable neutron-linuxbridge-agent.service
Verify the Neutron service
Note, note, note: run this on the controller node.
# openstack network agent list
The OpenVSwitch mode configuration starts here, again on the controller node. Install packages
# yum install -y openstack-neutron openstack-neutron-ml2 openstack-neutron-openvswitch openvswitch ebtables
Configure Neutron
Edit the Neutron configuration file /etc/neutron/neutron.conf in the corresponding sections
# vim /etc/neutron/neutron.conf
[DEFAULT]
transport_url = rabbit://rabbitmq:000000@controller
auth_strategy = keystone
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
[database]
connection = mysql+pymysql://neutron:000000@controller/neutron
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = 000000
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
[nova]  # add a new [nova] section to the file
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = 000000
Edit the ml2_conf configuration file /etc/neutron/plugins/ml2/ml2_conf.ini and add the following sections
# vim /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = openvswitch,l2population
extension_drivers = port_security
[ml2_type_flat]
flat_networks = provider
[ml2_type_vxlan]
vni_ranges = 1:1000
[securitygroup]
enable_ipset = true
Edit the OVS configuration file /etc/neutron/plugins/ml2/openvswitch_agent.ini and add the following sections
# vim /etc/neutron/plugins/ml2/openvswitch_agent.ini
[ovs]
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip =10.100.1.110
bridge_mappings = provider:br-ex
[agent]
tunnel_types = vxlan,gre
l2_population = true
arp_responder = true
[securitygroup]
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
local_ip should be set to the IP of a physical NIC; the layer-2 VXLAN tunnel traffic will be carried by the NIC that holds this IP. The example in this document uses the IP of the management NIC eth0; in production a separate dedicated NIC can be used.
Edit the l3_agent configuration file /etc/neutron/l3_agent.ini in the corresponding section
# vim /etc/neutron/l3_agent.ini
[DEFAULT]
interface_driver = openvswitch
external_network_bridge = br-ex
Edit the dhcp_agent configuration file /etc/neutron/dhcp_agent.ini in the corresponding section
# vim /etc/neutron/dhcp_agent.ini
[DEFAULT]
interface_driver = openvswitch
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
Edit the metadata_agent configuration file /etc/neutron/metadata_agent.ini in the corresponding section
# vim /etc/neutron/metadata_agent.ini
[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = 000000
Edit the Nova configuration file /etc/nova/nova.conf in the corresponding section
# vim /etc/nova/nova.conf
[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = 000000
service_metadata_proxy = true
metadata_proxy_shared_secret = 000000
Start services
# systemctl enable openvswitch
# systemctl start openvswitch
# systemctl status openvswitch
Create the bridge
The NIC name here is the business NIC connected to the provider network.
# ovs-vsctl add-br br-ex
# ovs-vsctl add-port br-ex eth1
# ovs-vsctl show
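To confirm the bridge and port were created correctly (eth1 is just the example NIC name used above):
# ovs-vsctl list-br
# ovs-vsctl port-to-br eth1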
Create the symlink
# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
Sync the database
# su -s /bin/sh -c "neutron-db-manage --config-file \
/etc/neutron/neutron.conf --config-file \
/etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
Restart the Nova service
# systemctl restart openstack-nova-api.service
Start the Neutron services
# systemctl enable neutron-server.service neutron-openvswitch-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service
# systemctl start neutron-server.service neutron-openvswitch-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service
The following is done on the compute node.
Install packages
# yum install openvswitch openstack-neutron-openvswitch ebtables ipset -y
Configure Neutron
Edit the Neutron configuration file /etc/neutron/neutron.conf in the corresponding sections
# vim /etc/neutron/neutron.conf
[DEFAULT]
transport_url = rabbit://rabbitmq:000000@controller
auth_strategy = keystone
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = 000000
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
Edit the OVS configuration file /etc/neutron/plugins/ml2/openvswitch_agent.ini and add the following sections
# vim /etc/neutron/plugins/ml2/openvswitch_agent.ini
[ovs]
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip = 10.100.1.20
bridge_mappings = provider:br-ex
[agent]
tunnel_types = vxlan,gre
l2_population = true
arp_responder = true
[securitygroup]
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
Edit the Nova configuration file /etc/nova/nova.conf in the corresponding section
# vim /etc/nova/nova.conf
[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = 000000
Install the ovs Python module; if the pip command is missing, install python3-pip first.
# pip3 install ovs
Start services
# systemctl enable openvswitch
# systemctl start openvswitch
# systemctl status openvswitch
Create the bridge
The NIC name here is the business NIC connected to the provider network.
# ovs-vsctl add-br br-ex
# ovs-vsctl add-port br-ex eth1
# ovs-vsctl show
Restart the Nova service
# systemctl restart openstack-nova-compute.service
# systemctl status openstack-nova-compute.service
Start the Neutron service
# systemctl start neutron-openvswitch-agent.service
# systemctl enable neutron-openvswitch-agent.service
# systemctl status neutron-openvswitch-agent.service
Verify the Neutron service
Run this on the controller node
# openstack network agent list
4. Deploy the Dashboard:
Run on the controller node
# yum install -y openstack-dashboard
Find the corresponding entries in the dashboard configuration file local_settings and modify them; if a field does not exist, add it.
# vim /etc/openstack-dashboard/local_settings
ALLOWED_HOSTS = ['*', 'two.example.com']
WEBROOT = '/dashboard/'
CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
'LOCATION': 'controller:11211',
},
}
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
OPENSTACK_HOST = "controller"
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_API_VERSIONS = {
"identity": 3,
"image": 2,
"volume": 2,
}
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
Edit the openstack-dashboard.conf configuration file and add the following line in the blank space around lines 3-4 of that file:
# vim /etc/httpd/conf.d/openstack-dashboard.conf
WSGIApplicationGroup %{GLOBAL}
4.3 Restart services
# systemctl restart httpd.service memcached.service
# systemctl status httpd.service memcached.service
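A quick smoke test from the controller itself before opening a browser; expect an HTTP 200 or a redirect to the login page:
# curl -I http://controller/dashboard/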
4.4 Access the web service
Address: http://10.12.20.XXX/dashboard
Domain: default
User: admin
5. Verify by creating a cloud instance:
# ssh-keygen -q -N ""
# openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey
# openstack keypair list
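Not strictly part of my original notes, but if you want to ping or SSH into the instance later, the default security group usually needs ICMP and port 22 opened first (assuming the instance will use the default group):
# openstack security group rule create --proto icmp default
# openstack security group rule create --proto tcp --dst-port 22 default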
Create the provider network (this is the link to the external network)
# source ~/admin-openrc
# openstack network create --share --external --provider-physical-network provider --provider-network-type flat provider
Set the provider allocation pool (the IP pool is the network segment used by the bridge created earlier)
# openstack subnet create --network provider --allocation-pool start=10.100.1.100,end=10.100.1.200 --gateway 10.100.1.1 --subnet-range 10.100.1.0/24 provider
Create the self-service network (the internal NAT network)
# openstack network create selfservice
Set the self-service allocation pool (can be chosen freely)
# openstack subnet create --network selfservice --gateway 10.100.2.1 --subnet-range 10.100.2.0/24 selfservice
# openstack router create router
Add the self-service network to the router as an interface
# neutron router-interface-add router selfservice
Set the gateway for the router
# neutron router-gateway-set router provider
Verify the router
# ip netns
View the router's ports
# neutron router-port-list router
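The standalone neutron client used above is deprecated in newer releases; if it is not available, the equivalent openstack client commands should be (same router/subnet/network names as above):
# openstack router add subnet router selfservice
# openstack router set --external-gateway provider router
# openstack port list --router router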
0: flavor ID; 1: vCPUs; 1024: memory (MB); 20: disk size (GB); name: flavor name. This can also be created from the web UI.
# openstack flavor create --id 0 --vcpus 1 --ram 1024 --disk 20 name
# openstack image list
# openstack subnet list
# openstack server create --flavor <flavor-id> --image <image-id> --network <network-id> <server-name>
flavor-id: flavor ID or name;
image-id: image ID or name;
server-name: instance name;
network-id: network ID or name.
Replace these according to your actual environment.
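Once the instance is created, its status and console URL can be checked from the controller (replace <server-name> with the name used above):
# openstack server list
# openstack console url show <server-name>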
Finally got it up and running ~ a little congratulations to myself ~