
Deploying OpenStack on uos server 1070e: Basics, Part 2

Author: 张小明

Ha, I got handed an OpenStack deployment today. I have no idea why the boss didn't just go with a kolla-based deployment, but never mind. Let's get started.

1. Installing Nova:

All of the following steps are performed on the controller node.

1.1 Create the Nova databases

# mysql -uroot -p000000

CREATE DATABASE nova_api;

CREATE DATABASE nova;

CREATE DATABASE nova_cell0;

GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY '000000';

GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY '000000';

GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY '000000';

GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY '000000';

GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY '000000';

GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY '000000';

flush privileges;
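
As a quick sanity check (not part of the original steps), you can confirm the grants work by listing the databases as the new nova user:

# mysql -unova -p000000 -e "SHOW DATABASES;"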

1.2 Create the user

# openstack user create --domain default --password-prompt nova

The password stays 000000, as before.

Add the admin role to the nova user

# openstack role add --project service --user nova admin

1.3 Create the service

Create the nova service

# openstack service create --name nova --description "OpenStack Compute" compute

Create the nova service endpoints

# openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1

# openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1

# openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1
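
Optionally, confirm that all three endpoints were registered:

# openstack endpoint list --service compute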

1.4 Install packages

# yum install -y openstack-nova-api openstack-nova-conductor openstack-nova-novncproxy openstack-nova-scheduler

This covers Nova's control plane only; if the controller node also needs to run virtual machines, the nova-compute RPM packages must be installed on it as well.

1.5 Configure Nova

Edit the corresponding sections of the Nova configuration file /etc/nova/nova.conf

# vim /etc/nova/nova.conf

[DEFAULT]

enabled_apis=osapi_compute,metadata

transport_url=rabbit://rabbitmq:000000@controller

my_ip=10.20.21.XXX

[api_database]

connection=mysql+pymysql://nova:000000@controller/nova_api

[database]

connection=mysql+pymysql://nova:000000@controller/nova

[api]

auth_strategy=keystone

[keystone_authtoken]

auth_url = http://controller:5000/v3

memcached_servers = controller:11211

auth_type = password

project_domain_name = default

user_domain_name = default

project_name = service

username = nova

password = 000000

[vnc]

enabled=true

server_listen=$my_ip

server_proxyclient_address=$my_ip

[glance]

api_servers=http://controller:9292

[oslo_concurrency]

lock_path=/var/lib/nova/tmp

[placement]

os_region_name=RegionOne

project_domain_name=Default

project_name=service

auth_type=password

user_domain_name=Default

auth_url=http://controller:5000/v3

username=placement

password=000000
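
If you prefer scripted edits over vim, the same keys can be set non-interactively with crudini, assuming the crudini package is available on the system. Two examples:

# crudini --set /etc/nova/nova.conf DEFAULT enabled_apis osapi_compute,metadata

# crudini --set /etc/nova/nova.conf api auth_strategy keystone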

1.6 Sync the databases

Sync the nova_api database

# su -s /bin/sh -c "nova-manage api_db sync" nova

Register the cell0 database

# su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova

Create the cell1 cell

# su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova

Sync the nova database

# su -s /bin/sh -c "nova-manage db sync" nova

1.7 Verify the configuration

# nova-manage cell_v2 list_cells

1.8 Start the services

# systemctl enable openstack-nova-api.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service

# systemctl start openstack-nova-api.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
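
Once the services are up, a quick check that the scheduler and conductor have registered:

# openstack compute service list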

2. Deploying the compute node:

All of the following steps are performed on the physical compute node.

2.1 Install packages

# yum install openstack-nova-compute -y

2.2 Configure Nova

Edit the corresponding sections of the Nova configuration file /etc/nova/nova.conf

# vim /etc/nova/nova.conf

[DEFAULT]

enabled_apis=osapi_compute,metadata

transport_url=rabbit://rabbitmq:000000@controller

my_ip=10.12.21.XXX (the compute node's IP)

[api]

auth_strategy=keystone

[keystone_authtoken]

auth_url=http://controller:5000/v3

memcached_servers=controller:11211

auth_type=password

project_domain_name=default

user_domain_name=default

project_name=service

username=nova

password=000000

[vnc]

enabled=True

server_listen=0.0.0.0

server_proxyclient_address=$my_ip

novncproxy_base_url=http://controller:6080/vnc_auto.html

[glance]

api_servers=http://controller:9292

[oslo_concurrency]

lock_path=/var/lib/nova/tmp

[placement]

os_region_name=RegionOne

project_domain_name=Default

project_name=service

auth_type=password

user_domain_name=Default

auth_url=http://controller:5000/v3

username=placement

password=000000

If the processor architecture is arm64, the following steps are needed on the compute node.

Change the owner and group of the /usr/share/AAVMF directory, then edit /etc/libvirt/qemu.conf and add the nvram configuration.

# chown -R nova:nova /usr/share/AAVMF

# vim /etc/libvirt/qemu.conf

nvram = [

    "/usr/share/AAVMF/AAVMF_CODE.fd:/usr/share/AAVMF/AAVMF_VARS.fd",

    "/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw:/usr/share/edk2/aarch64/vars-template-pflash.raw"

]

2.3 Confirm hardware acceleration support

Check whether the compute node supports hardware acceleration (x86_64)

On x86_64 processors, run the following command to check whether hardware acceleration is supported:

# egrep -c '(vmx|svm)' /proc/cpuinfo

If it returns 0, hardware acceleration is not supported, and libvirt must be configured to use QEMU instead of the default KVM. Edit the [libvirt] section of /etc/nova/nova.conf:

[libvirt]

virt_type = qemu

If it returns 1 or more, hardware acceleration is supported and no extra configuration is needed.
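
The check and the fallback can be combined into one small shell snippet. This is only a sketch, and it assumes crudini is installed:

if [ "$(egrep -c '(vmx|svm)' /proc/cpuinfo)" -eq 0 ]; then
    # No VT-x/AMD-V found: fall back to plain QEMU emulation
    crudini --set /etc/nova/nova.conf libvirt virt_type qemu
fi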

Check whether the compute node supports hardware acceleration (arm64)

On arm64 processors, run the following command to check whether hardware acceleration is supported:

# virt-host-validate

If FAIL is shown, hardware acceleration is not supported, and libvirt must be configured to use QEMU instead of the default KVM. The output looks like this:

QEMU: Checking if device /dev/kvm exists: FAIL (Check that CPU and firmware supports virtualization and kvm module is loaded)

Edit the [libvirt] section of /etc/nova/nova.conf:

[libvirt]

virt_type = qemu

If PASS is shown, hardware acceleration is supported and no extra configuration is needed. The output looks like this:

QEMU: Checking if device /dev/kvm exists: PASS

2.4 Start the services

# systemctl enable libvirtd.service openstack-nova-compute.service

# systemctl start libvirtd.service openstack-nova-compute.service

2.5 Verify the services

Run these back on the controller node. Note well: the controller node, not the compute node.

# openstack compute service list --service nova-compute

# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova

3. Deploying the networking components:

All of the following steps are performed on the physical controller node.

3.1 Create the Neutron database

# mysql -uroot -p000000

CREATE DATABASE neutron;

GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY '000000';

GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY '000000';

flush privileges;

3.2 Create the user

# openstack user create --domain default --password-prompt neutron

The password is still 000000.

Add the admin role to the neutron user

# openstack role add --project service --user neutron admin

3.3 Create the service

Create the neutron service

# openstack service create --name neutron --description "OpenStack Networking" network

Create the neutron service endpoints

# openstack endpoint create --region RegionOne network public http://controller:9696

# openstack endpoint create --region RegionOne network internal http://controller:9696

# openstack endpoint create --region RegionOne network admin http://controller:9696

3.4 LinuxBridge mode

Configure either LinuxBridge mode or OpenvSwitch mode; only one of the two is needed.

3.5 Controller node packages

# yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables -y

Configure Neutron

Edit the corresponding sections of the Neutron configuration file /etc/neutron/neutron.conf

# vim /etc/neutron/neutron.conf

[DEFAULT]

core_plugin=ml2

service_plugins=router

transport_url=rabbit://rabbitmq:000000@controller

auth_strategy=keystone

notify_nova_on_port_status_changes=true

notify_nova_on_port_data_changes=true

[database]

connection=mysql+pymysql://neutron:000000@controller/neutron

[keystone_authtoken]

auth_uri=http://controller:5000

auth_url=http://controller:35357

memcached_servers=controller:11211

auth_type=password

project_domain_name=default

user_domain_name=default

project_name=service

username=neutron

password=000000

[oslo_concurrency]

lock_path=/var/lib/neutron/tmp

[nova] # add this new section to the file

auth_url=http://controller:35357

auth_type=password

project_domain_name=default

user_domain_name=default

region_name=RegionOne

project_name=service

username=nova

password=000000

Edit the ml2_conf configuration file /etc/neutron/plugins/ml2/ml2_conf.ini and add the following sections

# vim /etc/neutron/plugins/ml2/ml2_conf.ini

[ml2]

type_drivers=flat,vlan,vxlan

tenant_network_types=vxlan

mechanism_drivers=linuxbridge,l2population

extension_drivers=port_security

[ml2_type_flat]

flat_networks=provider

[ml2_type_vxlan]

vni_ranges=1:1000

[securitygroup]

enable_ipset=true

Edit the linuxbridge configuration file /etc/neutron/plugins/ml2/linuxbridge_agent.ini

# vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini

[linux_bridge]

physical_interface_mappings=provider:eth1

[vxlan]

enable_vxlan=true

local_ip=10.12.20.XXX

l2_population=true

[securitygroup]

enable_security_group=true

firewall_driver=neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

Set local_ip to the IP of a physical NIC; VXLAN layer-2 tunnel traffic is carried over the NIC that holds this IP. This document uses the IP of the management NIC eth0; in production, a separate dedicated NIC can be used.

In physical_interface_mappings, enter the name of the service NIC connected to the provider network after `provider:`. In this document it is eth1.
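
If you are unsure of the NIC names on a node, listing the interfaces first helps:

# ip -br link show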

Edit the corresponding section of the l3_agent configuration file /etc/neutron/l3_agent.ini

# vim /etc/neutron/l3_agent.ini

[DEFAULT]

interface_driver=linuxbridge

Edit the corresponding section of the dhcp_agent configuration file /etc/neutron/dhcp_agent.ini

# vim /etc/neutron/dhcp_agent.ini

[DEFAULT]

interface_driver=linuxbridge

dhcp_driver=neutron.agent.linux.dhcp.Dnsmasq

enable_isolated_metadata=true

Edit the corresponding section of the metadata_agent configuration file /etc/neutron/metadata_agent.ini

Here 000000 is the metadata proxy shared secret, not a database password; it must match metadata_proxy_shared_secret in nova.conf.

# vim /etc/neutron/metadata_agent.ini

[DEFAULT]

nova_metadata_host=controller

metadata_proxy_shared_secret=000000

Edit the corresponding section of the Nova configuration file /etc/nova/nova.conf

# vim /etc/nova/nova.conf

[neutron]

url = http://controller:9696

auth_url=http://controller:35357

auth_type=password

project_domain_name=default

user_domain_name=default

region_name=RegionOne

project_name=service

username=neutron

password=000000

service_metadata_proxy=true

metadata_proxy_shared_secret=000000

Configure the bridge netfilter

# echo net.bridge.bridge-nf-call-iptables = 1 >> /etc/sysctl.conf

# echo net.bridge.bridge-nf-call-ip6tables = 1 >> /etc/sysctl.conf

# cat /etc/sysctl.conf

# modprobe br_netfilter

# sysctl -p

# sed -i '$amodprobe br_netfilter' /etc/rc.local
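
On systemd-based systems, /etc/rc.local only runs if the rc-local service is enabled; a more reliable alternative (not from the original steps) is a modules-load.d entry:

# echo br_netfilter > /etc/modules-load.d/br_netfilter.conf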

Disable IPv6

# echo net.ipv6.conf.all.disable_ipv6 = 1 >> /etc/sysctl.conf

# echo net.ipv6.conf.default.disable_ipv6 = 1 >> /etc/sysctl.conf

# sysctl -p

Create the symlink

# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

Sync the database

# su -s /bin/sh -c "neutron-db-manage --config-file \

/etc/neutron/neutron.conf --config-file \

/etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
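
To confirm the migration completed, neutron-db-manage can report the current schema revision (optional check):

# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini current" neutron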

Restart the Nova service

# systemctl restart openstack-nova-api.service

Start the Neutron services

# systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service

# systemctl start neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service

3.6 Compute node

Install packages

# yum install openstack-neutron-linuxbridge ebtables ipset -y

Configure Neutron

Edit the corresponding sections of the Neutron configuration file /etc/neutron/neutron.conf

# vim /etc/neutron/neutron.conf

[DEFAULT]

transport_url=rabbit://rabbitmq:000000@controller

auth_strategy=keystone

[keystone_authtoken]

auth_uri=http://controller:5000

auth_url=http://controller:35357

memcached_servers=controller:11211

auth_type=password

project_domain_name=default

user_domain_name=default

project_name=service

username=neutron

password=000000

[oslo_concurrency]

lock_path = /var/lib/neutron/tmp

Edit the linuxbridge configuration file /etc/neutron/plugins/ml2/linuxbridge_agent.ini

# vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini

[linux_bridge]

physical_interface_mappings=provider:eth1

[vxlan]

enable_vxlan=true

local_ip=10.12.20.XXX

l2_population=true

[securitygroup]

enable_security_group=true

firewall_driver=neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

Edit the corresponding section of the Nova configuration file /etc/nova/nova.conf

# vim /etc/nova/nova.conf

[neutron]

url=http://controller:9696

auth_url=http://controller:35357

auth_type=password

project_domain_name=default

user_domain_name=default

region_name=RegionOne

project_name=service

username=neutron

password=000000

Configure the bridge netfilter

# echo net.bridge.bridge-nf-call-iptables = 1 >> /etc/sysctl.conf

# echo net.bridge.bridge-nf-call-ip6tables = 1 >> /etc/sysctl.conf

# cat /etc/sysctl.conf

# modprobe br_netfilter

# sysctl -p

# sed -i '$amodprobe br_netfilter' /etc/rc.local

Disable IPv6

# echo net.ipv6.conf.all.disable_ipv6 = 1 >> /etc/sysctl.conf

# echo net.ipv6.conf.default.disable_ipv6 = 1 >> /etc/sysctl.conf

# sysctl -p

Restart the Nova service

# systemctl restart openstack-nova-compute.service

Start the Neutron service

# systemctl start neutron-linuxbridge-agent.service

# systemctl enable neutron-linuxbridge-agent.service

Verify the Neutron services

Run this on the controller node. Note well: the controller node, not the compute node.

# openstack network agent list

3.7 OpenvSwitch mode

Run on the controller node

Install packages

# yum install -y openstack-neutron openstack-neutron-ml2 openstack-neutron-openvswitch openvswitch ebtables

Configure Neutron

Edit the corresponding sections of the Neutron configuration file /etc/neutron/neutron.conf

# vim /etc/neutron/neutron.conf

[DEFAULT]

transport_url = rabbit://rabbitmq:000000@controller

auth_strategy = keystone

core_plugin = ml2

service_plugins = router

allow_overlapping_ips = true

notify_nova_on_port_status_changes = true

notify_nova_on_port_data_changes = true

[database]

connection = mysql+pymysql://neutron:000000@controller/neutron

[keystone_authtoken]

auth_uri = http://controller:5000

auth_url = http://controller:35357

memcached_servers = controller:11211

auth_type = password

project_domain_name = default

user_domain_name = default

project_name = service

username = neutron

password = 000000

[oslo_concurrency]

lock_path = /var/lib/neutron/tmp

[nova] # add this new section to the file

auth_url = http://controller:35357

auth_type = password

project_domain_name = default

user_domain_name = default

region_name = RegionOne

project_name = service

username = nova

password = 000000

Edit the ml2_conf configuration file /etc/neutron/plugins/ml2/ml2_conf.ini and add the following sections

# vim /etc/neutron/plugins/ml2/ml2_conf.ini

[ml2]

type_drivers = flat,vlan,vxlan

tenant_network_types = vxlan

mechanism_drivers = openvswitch,l2population

extension_drivers = port_security

[ml2_type_flat]

flat_networks = provider

[ml2_type_vxlan]

vni_ranges = 1:1000

[securitygroup]

enable_ipset = true

Edit the ovs configuration file /etc/neutron/plugins/ml2/openvswitch_agent.ini and add the following sections

# vim /etc/neutron/plugins/ml2/openvswitch_agent.ini

[ovs]

integration_bridge = br-int

tunnel_bridge = br-tun

local_ip = 10.100.1.110

bridge_mappings = provider:br-ex

[agent]

tunnel_types = vxlan,gre

l2_population = true

arp_responder = true

[securitygroup]

enable_security_group = True

firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

Set local_ip to the IP of a physical NIC; VXLAN layer-2 tunnel traffic is carried over the NIC that holds this IP. This document uses the IP of the management NIC eth0; in production, a separate dedicated NIC can be used.

Edit the corresponding section of the l3_agent configuration file /etc/neutron/l3_agent.ini

# vim /etc/neutron/l3_agent.ini

[DEFAULT]

interface_driver = openvswitch

external_network_bridge = br-ex

Edit the corresponding section of the dhcp_agent configuration file /etc/neutron/dhcp_agent.ini

# vim /etc/neutron/dhcp_agent.ini

[DEFAULT]

interface_driver = openvswitch

dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq

enable_isolated_metadata = true

Edit the corresponding section of the metadata_agent configuration file /etc/neutron/metadata_agent.ini

# vim /etc/neutron/metadata_agent.ini

[DEFAULT]

nova_metadata_host = controller

metadata_proxy_shared_secret = 000000

Edit the corresponding section of the Nova configuration file /etc/nova/nova.conf

# vim /etc/nova/nova.conf

[neutron]

url = http://controller:9696

auth_url = http://controller:35357

auth_type = password

project_domain_name = default

user_domain_name = default

region_name = RegionOne

project_name = service

username = neutron

password = 000000

service_metadata_proxy = true

metadata_proxy_shared_secret = 000000

Start the services

# systemctl enable openvswitch

# systemctl start openvswitch

# systemctl status openvswitch

Create the bridge

The NIC name here is the name of the service NIC connected to the provider network.

# ovs-vsctl add-br br-ex

# ovs-vsctl add-port br-ex eth1

# ovs-vsctl show

Create the symlink

# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

Sync the database

# su -s /bin/sh -c "neutron-db-manage --config-file \

/etc/neutron/neutron.conf --config-file \

/etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

Restart the Nova service

# systemctl restart openstack-nova-api.service

Start the Neutron services

# systemctl enable neutron-server.service neutron-openvswitch-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service

# systemctl start neutron-server.service neutron-openvswitch-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service

Run on the compute node

Install packages

# yum install openvswitch openstack-neutron-openvswitch ebtables ipset -y

Configure Neutron

Edit the corresponding sections of the Neutron configuration file /etc/neutron/neutron.conf

# vim /etc/neutron/neutron.conf

[DEFAULT]

transport_url = rabbit://rabbitmq:000000@controller

auth_strategy = keystone

[keystone_authtoken]

auth_uri = http://controller:5000

auth_url = http://controller:35357

memcached_servers = controller:11211

auth_type = password

project_domain_name = default

user_domain_name = default

project_name = service

username = neutron

password = 000000

[oslo_concurrency]

lock_path = /var/lib/neutron/tmp

Edit the ovs configuration file /etc/neutron/plugins/ml2/openvswitch_agent.ini and add the following sections

# vim /etc/neutron/plugins/ml2/openvswitch_agent.ini

[ovs]

integration_bridge = br-int

tunnel_bridge = br-tun

local_ip = 10.100.1.20

bridge_mappings = provider:br-ex

[agent]

tunnel_types = vxlan,gre

l2_population = true

arp_responder = true

[securitygroup]

enable_security_group = True

firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

Edit the corresponding section of the Nova configuration file /etc/nova/nova.conf

# vim /etc/nova/nova.conf

[neutron]

url = http://controller:9696

auth_url = http://controller:35357

auth_type = password

project_domain_name = default

user_domain_name = default

region_name = RegionOne

project_name = service

username = neutron

password = 000000

Start the services

Install the ovs Python module; if the pip command is missing, install python3-pip first.

# pip3 install ovs

Start the services

# systemctl enable openvswitch

# systemctl start openvswitch

# systemctl status openvswitch

Create the bridge

The NIC name here is the name of the service NIC connected to the provider network.

# ovs-vsctl add-br br-ex

# ovs-vsctl add-port br-ex eth1

# ovs-vsctl show

Restart the Nova service

# systemctl restart openstack-nova-compute.service

# systemctl status openstack-nova-compute.service

Start the Neutron service

# systemctl start neutron-openvswitch-agent.service

# systemctl enable neutron-openvswitch-agent.service

# systemctl status neutron-openvswitch-agent.service

Verify the Neutron services

Run on the controller node

# openstack network agent list

4. Deploying the Dashboard:

Install the web service

4.1 Install packages

Run on the controller node

# yum install -y openstack-dashboard

4.2 Configure the Dashboard

Find and modify the corresponding entries in the Dashboard configuration file local_settings; add any fields that are missing.

# vim /etc/openstack-dashboard/local_settings

ALLOWED_HOSTS = ['*', 'two.example.com']

WEBROOT = '/dashboard/'

CACHES = {

    'default': {

        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',

        'LOCATION': 'controller:11211',

    },

}

SESSION_ENGINE = 'django.contrib.sessions.backends.cache'

OPENSTACK_HOST = "controller"

OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST

OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True

OPENSTACK_API_VERSIONS = {

    "identity": 3,

    "image": 2,

    "volume": 2,

}

OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"

OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"

Edit the openstack-dashboard.conf configuration file and add the following directive in the blank space around lines 3-4 of that file

# vim /etc/httpd/conf.d/openstack-dashboard.conf

WSGIApplicationGroup %{GLOBAL}

4.3 Restart the services

# systemctl restart httpd.service memcached.service

# systemctl status httpd.service memcached.service
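
A quick reachability check from the controller itself, assuming curl is installed:

# curl -I http://controller/dashboard/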

4.4 Access the web service

Address: http://10.12.20.XXX/dashboard

Domain: default

User: admin

Password: 000000

5. Verifying by creating an instance:

Run on the controller node

5.1 Create a key pair

# ssh-keygen -q -N ""

# openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey

# openstack keypair list

5.2 Create the networks

Create the provider network (linked to the external network)

# source ~/admin-openrc

# openstack network create --share --external --provider-physical-network provider --provider-network-type flat provider

Set the provider allocation pool (the pool comes from the IP subnet used by the bridge NIC)

# openstack subnet create --network provider --allocation-pool start=10.100.1.100,end=10.100.1.200 --gateway 10.100.1.1 --subnet-range 10.100.1.0/24 provider
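
If instances need external name resolution, a DNS server can also be added to this subnet afterwards; the resolver address below is only an example:

# openstack subnet set --dns-nameserver 114.114.114.114 provider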

Create the self-service network (internal NAT network)

# openstack network create selfservice

Set the self-service allocation pool (any range will do)

# openstack subnet create --network selfservice --gateway 10.100.2.1 --subnet-range 10.100.2.0/24 selfservice

5.3 Create the router

# openstack router create router

Add the self-service network to the router as an interface

# neutron router-interface-add router selfservice

Set the router's gateway

# neutron router-gateway-set router provider

Verify the router

# ip netns

View the router's ports

# neutron router-port-list router

5.4 Create a flavor

0: flavor ID; 1: number of vCPUs; 1024: memory in MB; 20: disk size in GB; name: the flavor's name. Flavors can also be created in the web UI.

# openstack flavor create --id 0 --vcpus 1 --ram 1024 --disk 20 name

5.5 View images and networks

# openstack image list

# openstack subnet list

5.6 Create an instance

# openstack server create --flavor <flavor-id> --image <image-id> --network <network-id> <server-name>

flavor-id: flavor ID or name;

image-id: image ID or name;

server-name: instance name;

network-id: network ID or name

Replace these according to your environment.
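
For example, with the resources created in this guide (the image name cirros and the instance name vm01 are placeholders; substitute your own):

# openstack server create --flavor name --image cirros --network selfservice --key-name mykey vm01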

5.7 View the instance
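
# openstack server list

The instance is ready once its Status column shows ACTIVE.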

It's finally up and running! A little congratulations to myself!
