Connecting OpenStack to Ceph storage
Prerequisites: a working Ceph cluster and an OpenStack cluster.
Ceph cluster
Hostname | IP | Cluster |
---|---|---|
node1 | 192.168.41.15 | ceph |
node2 | 192.168.41.25 | ceph |
node3 | 192.168.41.35 | ceph |
root@node1:~# ceph -s
  cluster:
    id:     685a4bd2-47a1-11ee-840d-f353a63ff242
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum node1,node2,node3 (age 2m)
    mgr: node1.qpowdn(active, since 112s), standbys: node2.xqhlzn
    osd: 6 osds: 6 up (since 119s), 6 in (since 5m)

  data:
    pools:   1 pools, 1 pgs
    objects: 2 objects, 449 KiB
    usage:   34 MiB used, 600 GiB / 600 GiB avail
    pgs:     1 active+clean
root@node1:~#
OpenStack cluster
Host | IP | Cluster |
---|---|---|
controller01 | 192.168.41.10 | openstack |
computer01 | 192.168.41.20 | openstack |
computer02 | 192.168.41.30 | openstack |
root@controller01:~# openstack endpoint list
+----------------------------------+-----------+--------------+--------------+---------+-----------+----------------------------------------------------+
| ID | Region | Service Name | Service Type | Enabled | Interface | URL |
+----------------------------------+-----------+--------------+--------------+---------+-----------+----------------------------------------------------+
| 03e406f6d9574bb69b440430ee0273d2 | RegionOne | keystone | identity | True | public | http://www.controller01.org:5000/v3/ |
| 0bf6e4da776e4020a5c81aefc96eda25 | RegionOne | glance | image | True | public | http://www.controller01.org:9292 |
| 1bf5d9e3ba6446a8908f0fb6f3d105bd | RegionOne | glance | image | True | admin | http://www.controller01.org:9292 |
| 27e8986af90b4dc184b8f435140ccb4e | RegionOne | neutron | network | True | internal | http://www.controller01.org:9696 |
| 301bc4436b3444109181ac8ccf0d7443 | RegionOne | placement | placement | True | internal | http://www.controller01.org:8778 |
| 43463f0000034c32aca4f81071f06bde | RegionOne | cinderv3 | volumev3 | True | admin | http://www.controller01.org:8776/v3/%(project_id)s |
| 46165b090ec8424d9ac4e2c5ef162d00 | RegionOne | placement | placement | True | admin | http://www.controller01.org:8778 |
| 5800ce77452748a985dc6bf9995aa728 | RegionOne | placement | placement | True | public | http://www.controller01.org:8778 |
| 58977689cab14fcfb5e5ec7b547379df | RegionOne | neutron | network | True | public | http://www.controller01.org:9696 |
| 66c0fba816024c4e8a06df2a06a44691 | RegionOne | neutron | network | True | admin | http://www.controller01.org:9696 |
| 9c0146900c7944cea5cb32240546fb2f | RegionOne | keystone | identity | True | admin | http://www.controller01.org:5000/v3/ |
| 9e902bd254f4478db1cff8e7ec0fc423 | RegionOne | glance | image | True | internal | http://www.controller01.org:9292 |
| abd59e153a9b4d22b6e216eccab55532 | RegionOne | keystone | identity | True | internal | http://www.controller01.org:5000/v3/ |
| b71504ce04384d2296802713cae5254c | RegionOne | cinderv3 | volumev3 | True | internal | http://www.controller01.org:8776/v3/%(project_id)s |
| c0b1d5454d784d27a8ad3b6dc6131870 | RegionOne | nova | compute | True | public | http://www.controller01.org:8774/v2.1 |
| d0f96b07fe744383b655f25309e2a94e | RegionOne | nova | compute | True | internal | http://www.controller01.org:8774/v2.1 |
| f6a0e7b9dd0841c9b227956c602c43f6 | RegionOne | nova | compute | True | admin | http://www.controller01.org:8774/v2.1 |
| fc6f702525164b18a4e03ad2c895956a | RegionOne | cinderv3 | volumev3 | True | public | http://www.controller01.org:8776/v3/%(project_id)s |
+----------------------------------+-----------+--------------+--------------+---------+-----------+----------------------------------------------------+
root@controller01:~#
Create Ceph storage pools
1. Create the glance, volumes, vms, and backups pools.
root@node1:~# ceph osd pool create glance 16
pool 'glance' created
root@node1:~# ceph osd pool create volumes 16
pool 'volumes' created
root@node1:~# ceph osd pool create vms 16
pool 'vms' created
root@node1:~# ceph osd pool create backups 16
pool 'backups' created
root@node1:/etc/ceph# ceph osd pool application enable glance rbd
root@node1:/etc/ceph# ceph osd pool application enable volumes rbd
root@node1:/etc/ceph# ceph osd pool application enable vms rbd
root@node1:/etc/ceph# ceph osd pool application enable backups rbd
root@node1:/etc/ceph# ceph osd pool stats
pool .mgr id 1
nothing is going on
pool glance id 2
nothing is going on
pool volumes id 3
nothing is going on
pool vms id 4
nothing is going on
pool backups id 5
nothing is going on
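As a quick sanity check (my own addition, not part of the original run), confirm the rbd application tag landed on every pool; each command should print {"rbd": {}}:
root@node1:/etc/ceph# for p in glance volumes vms backups; do ceph osd pool application get $p; done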
Create users
cephx authentication: if these capability grants look unfamiliar, see my earlier post on cephx.
1. Create the client.glance, client.cinder, and client.backup users and their keyring files
root@node1:/etc/ceph# ceph auth get-or-create client.glance mon "allow r" osd "allow class-read object_prefix rbd_children,allow rwx pool=glance" |tee /etc/ceph/ceph.client.glance.keyring
[client.glance]
key = AQCXhP5kakCsFxAAAHz/AO4PH3ZsOv8SXb9xbA==
root@node1:/etc/ceph# ceph auth get-or-create client.cinder mon "allow r" osd "allow class-read object_prefix rbd_children,allow rwx pool=volumes,allow rwx pool=vms,allow rx pool=glance" |tee /etc/ceph/ceph.client.cinder.keyring
[client.cinder]
key = AQDJh/5k5XhTIRAAfYRvArSw/2sYWW6sa2scEQ==
root@node1:/etc/ceph# ceph auth get-or-create client.backup mon "profile rbd" osd "profile rbd pool=backups" |tee /etc/ceph/ceph.client.backup.keyring
[client.backup]
key = AQA7xv5kM8oAChAA07pHo/hQmIFz7lMLROr/4w==
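To review the capabilities that were actually granted (a verification step, not in the original transcript):
root@node1:/etc/ceph# ceph auth get client.glance
root@node1:/etc/ceph# ceph auth get client.cinder
root@node1:/etc/ceph# ceph auth get client.backup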
Copy the keyrings
1. Copy the glance and cinder keyrings, plus the cluster configuration file, to the controller01 control node. Before doing so, create the /etc/ceph/ directory on every OpenStack node, as sketched below.
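A minimal sketch for pre-creating that directory, assuming SSH access to the three hostnames used in this post (scp will not create it for you):
root@node1:~# for h in controller01.org computer01.org computer02.org; do ssh $h mkdir -p /etc/ceph; done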
root@node1:/etc/ceph# scp ceph.client.cinder.keyring controller01.org:/etc/ceph/
root@controller01.org's password:
ceph.client.cinder.keyring 100% 64 88.8KB/s 00:00
root@node1:/etc/ceph# scp ceph.client.glance.keyring controller01.org:/etc/ceph/
root@controller01.org's password:
ceph.client.glance.keyring 100% 64 66.5KB/s 00:00
root@node1:/etc/ceph# scp ceph.conf controller01.org:/etc/ceph/
root@controller01.org's password:
ceph.conf 100% 277 306.6KB/s 00:00
root@controller01:/etc/ceph# ls
ceph.client.cinder.keyring ceph.client.glance.keyring ceph.conf
2. Copy the cinder and backup keyrings, plus the cluster configuration file, to the computer01 and computer02 compute nodes.
computer01
root@node1:/etc/ceph# scp ceph.client.backup.keyring ceph.client.cinder.keyring ceph.conf computer01.org:/etc/ceph/
root@computer01:~# ls /etc/ceph/
ceph.client.backup.keyring ceph.client.cinder.keyring ceph.conf
computer02
root@node1:/etc/ceph# scp ceph.client.backup.keyring ceph.client.cinder.keyring ceph.conf computer02.org:/etc/ceph/
root@computer02:~# ls /etc/ceph/
ceph.client.backup.keyring ceph.client.cinder.keyring ceph.conf
Add the libvirt secret on the compute nodes
If there are multiple compute nodes, the UUID must be identical on all of them.
computer01
root@computer01:~# cd /etc/ceph/
root@computer01:/etc/ceph# uuidgen
5dd9d4bd-fb51-4f71-ac8b-c59b41c59355
root@computer01:/etc/ceph# cat >> secret.xml << EOF
<secret ephemeral='no' private='no'>
<uuid>5dd9d4bd-fb51-4f71-ac8b-c59b41c59355</uuid>
<usage type='ceph'>
<name>client.cinder secret</name>
</usage>
</secret>
EOF
root@computer01:/etc/ceph# UUID=5dd9d4bd-fb51-4f71-ac8b-c59b41c59355
root@computer01:/etc/ceph# virsh secret-define --file secret.xml
Secret 5dd9d4bd-fb51-4f71-ac8b-c59b41c59355 created
root@computer01:/etc/ceph# virsh secret-list
UUID Usage
-------------------------------------------------------------------
5dd9d4bd-fb51-4f71-ac8b-c59b41c59355 ceph client.cinder secret
root@computer01:/etc/ceph# virsh secret-set-value --secret ${UUID} --base64 $(cat ceph.client.cinder.keyring | grep key | awk -F ' ' '{print $3}')
error: Passing secret value as command-line argument is insecure!
Secret value set
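The warning is harmless, but newer libvirt versions can read the secret value from a file so the key never shows up on the command line; a sketch, assuming your virsh supports --file (the key in the keyring is already base64-encoded, so no --plain flag is needed):
root@computer01:/etc/ceph# awk '/key/ {print $3}' ceph.client.cinder.keyring > client.cinder.key
root@computer01:/etc/ceph# virsh secret-set-value --secret ${UUID} --file client.cinder.key
root@computer01:/etc/ceph# rm client.cinder.key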
computer02:
root@computer02:~# UUID=5dd9d4bd-fb51-4f71-ac8b-c59b41c59355
root@computer02:~# cd /etc/ceph/
root@computer02:/etc/ceph# cat >> secret.xml << EOF
<secret ephemeral='no' private='no'>
<uuid>$UUID</uuid>
<usage type='ceph'>
<name>client.cinder secret</name>
</usage>
</secret>
EOF
root@computer02:/etc/ceph# cat secret.xml
<secret ephemeral='no' private='no'>
<uuid>5dd9d4bd-fb51-4f71-ac8b-c59b41c59355</uuid>
<usage type='ceph'>
<name>client.cinder secret</name>
</usage>
</secret>
root@computer02:/etc/ceph# virsh secret-define --file secret.xml
Secret 5dd9d4bd-fb51-4f71-ac8b-c59b41c59355 created
root@computer02:/etc/ceph# cat ceph.client.cinder.keyring
[client.cinder]
key = AQDJh/5k5XhTIRAAfYRvArSw/2sYWW6sa2scEQ==
root@computer02:/etc/ceph# virsh secret-set-value --secret ${UUID} --base64 $(cat ceph.client.cinder.keyring | grep key | awk -F ' ' '{print $3}')
root@computer02:/etc/ceph# virsh secret-list
UUID Usage
-------------------------------------------------------------------
5dd9d4bd-fb51-4f71-ac8b-c59b41c59355 ceph client.cinder secret
Install the Ceph client on all OpenStack nodes
so that they can access the Ceph storage:
apt install ceph-common -y
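With ceph-common, the keyrings, and ceph.conf in place, client access can be verified from any OpenStack node before touching the service configs, for example with the cinder identity (the pool is still empty, so an empty listing with no error is the expected success case):
rbd ls --id cinder --pool volumes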
Configure Glance to use Ceph as its backend storage
Change the owner of the cephx keyring file to the glance user
root@controller01:/etc/ceph# chown glance:glance ceph.client.glance.keyring
Edit the configuration file so Glance uses the glance pool in Ceph
root@controller01:/etc/glance# vi glance-api.conf
[glance_store]
stores = rbd,file,http
default_store = rbd
rbd_store_pool = glance
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_chunk_size = 8
#stores = file,http
#default_store = file
#filesystem_store_datadir = /var/lib/glance/images/
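Optionally, the upstream Ceph guide also recommends exposing direct image URLs so Cinder and Nova can create copy-on-write clones from the glance pool instead of full copies; note that this reveals backend location details through the image API, so treat it as a sketch to adapt:
[DEFAULT]
show_image_direct_url = True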
Restart the glance-api service and watch the log for errors:
root@controller01:/etc/glance# service glance-api restart
root@controller01:/etc/glance# tail -f /var/log/glance/glance-api.log
Upload an image to test:
root@controller01:~# openstack image create cirros-ceph --disk-format qcow2 --file cirros-0.4.0-x86_64-disk.img
+------------------+-------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+------------------+-------------------------------------------------------------------------------------------------------------------------------------------------+
| container_format | bare |
| created_at | 2023-09-11T06:50:21Z |
| disk_format | qcow2 |
| file | /v2/images/114613ab-bcfc-488f-a850-357afe58ffdd/file |
| id | 114613ab-bcfc-488f-a850-357afe58ffdd |
| min_disk | 0 |
| min_ram | 0 |
| name | cirros-ceph |
| owner | 63a4d45b00424d4790068030c865a3ae |
| properties | os_hidden='False', owner_specified.openstack.md5='', owner_specified.openstack.object='images/cirros-ceph', owner_specified.openstack.sha256='' |
| protected | False |
| schema | /v2/schemas/image |
| status | queued |
| tags | |
| updated_at | 2023-09-11T06:50:21Z |
| visibility | shared |
+------------------+-------------------------------------------------------------------------------------------------------------------------------------------------+
root@controller01:~# openstack image list
+--------------------------------------+-------------+--------+
| ID | Name | Status |
+--------------------------------------+-------------+--------+
| 114613ab-bcfc-488f-a850-357afe58ffdd | cirros-ceph | active |
| f2024e7f-12e9-49d1-87c1-589a73370893 | cirros-init | active |
+--------------------------------------+-------------+--------+
root@controller01:~#
On the Ceph cluster, look at the objects in the glance pool: note that the IDs match exactly.
root@node1:/etc/ceph# rbd ls --pool glance
114613ab-bcfc-488f-a850-357afe58ffdd
root@node1:/etc/ceph# rados -p glance ls
rbd_header.fba867de494b
rbd_object_map.fba867de494b
rbd_directory
rbd_info
rbd_id.114613ab-bcfc-488f-a850-357afe58ffdd
rbd_object_map.fba867de494b.0000000000000004
rbd_data.fba867de494b.0000000000000000
rbd_data.fba867de494b.0000000000000001
root@node1:/etc/ceph#
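You can also inspect the RBD image itself; its name is exactly the Glance image ID:
root@node1:/etc/ceph# rbd info glance/114613ab-bcfc-488f-a850-357afe58ffdd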
Configure Cinder to use Ceph as its backend storage
Change the owner of the client.cinder cephx keyring file to the cinder user
root@controller01:/etc/ceph# chown cinder.cinder ceph.client.cinder.keyring
root@computer01:/etc/ceph# chown cinder.cinder ceph.client.cinder.keyring
root@computer02:/etc/ceph# chown cinder.cinder ceph.client.cinder.keyring
On the controller node, edit the Cinder configuration file to make ceph the default volume type
vim /etc/cinder/cinder.conf
[DEFAULT]
default_volume_type = ceph
Restart the service and check the log for errors
root@controller01:/etc/cinder# systemctl restart cinder-scheduler.service
root@controller01:/etc/cinder# tail -f /var/log/cinder/cinder-scheduler.log
Edit the configuration file on the compute nodes; the UUID can be looked up with virsh secret-list
[DEFAULT]
enabled_backends = ceph,lvm
[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
glance_api_version = 2
rbd_user = cinder
rbd_secret_uuid = 5dd9d4bd-fb51-4f71-ac8b-c59b41c59355
volume_backend_name = ceph
Restart the service on the compute nodes:
root@computer02:/etc/cinder# systemctl restart cinder-volume.service
root@computer02:/etc/cinder# tail -f /var/log/cinder/cinder-volume.log
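Before creating the volume type, it is worth confirming that the new backend registered with the scheduler (an extra check of mine); the ceph backend should appear as a cinder-volume service with State up:
root@controller01:~# openstack volume service list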
Create the volume type on the controller node:
root@controller01:~# openstack volume type create ceph
+-------------+--------------------------------------+
| Field | Value |
+-------------+--------------------------------------+
| description | None |
| id | d6ea83c7-6289-4316-8d13-d38d941cf3de |
| is_public | True |
| name | ceph |
+-------------+--------------------------------------+
root@controller01:~# cinder --os-username admin --os-tenant-name admin type-key ceph set volume_backend_name=ceph
root@controller01:~# openstack volume type list
+--------------------------------------+-------------+-----------+
| ID | Name | Is Public |
+--------------------------------------+-------------+-----------+
| d6ea83c7-6289-4316-8d13-d38d941cf3de | ceph | True |
| ba112cb7-6a1f-4642-bf3c-33293e5a82de | lvm | True |
| a71f6d68-4aa5-4aa7-8c63-396f425639e0 | __DEFAULT__ | True |
+--------------------------------------+-------------+-----------+
Create a 1 GB volume to test that Ceph is usable:
root@controller01:~# openstack volume create ceph01 --type ceph --size 1
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| attachments | [] |
| availability_zone | nova |
| bootable | false |
| consistencygroup_id | None |
| created_at | 2023-09-11T07:22:20.571188 |
| description | None |
| encrypted | False |
| id | 8226863b-a2b7-4fb0-be73-16f5da340e1f |
| migration_status | None |
| multiattach | False |
| name | ceph01 |
| properties | |
| replication_status | None |
| size | 1 |
| snapshot_id | None |
| source_volid | None |
| status | creating |
| type | ceph |
| updated_at | None |
| user_id | b120ba428f544306bacc7215743a0871 |
+---------------------+--------------------------------------+
root@controller01:~# openstack volume list
+--------------------------------------+--------+-----------+------+-------------+
| ID | Name | Status | Size | Attached to |
+--------------------------------------+--------+-----------+------+-------------+
| 8226863b-a2b7-4fb0-be73-16f5da340e1f | ceph01 | available | 1 | |
| ca6c3ee6-47c3-4da3-9188-4d3b9a84422d | lvm01 | available | 2 | |
| 8256ec20-48d8-4dfb-a301-fa48bee9c40b | lvm01 | available | 1 | |
| 0cbb9c80-a922-4b76-9d27-e8890d71fc2b | | available | 1 | |
+--------------------------------------+--------+-----------+------+-------------+
root@controller01:~#
Check on the Ceph cluster: note that this ID matches the one in OpenStack
root@node1:/etc/ceph# rbd ls --pool volumes
volume-8226863b-a2b7-4fb0-be73-16f5da340e1f
Configure volume backup
Compute nodes
Install the package:
apt install cinder-backup -y
Change the owner of the cephx keyring:
root@computer01:/etc/ceph# chown cinder:cinder ceph.client.backup.keyring
root@computer02:/etc/ceph# chown cinder:cinder ceph.client.backup.keyring
Edit the configuration file
vim /etc/cinder/cinder.conf
[DEFAULT]
backup_driver = cinder.backup.drivers.ceph.CephBackupDriver
backup_ceph_conf=/etc/ceph/ceph.conf
backup_ceph_user = backup
backup_ceph_chunk_size = 4194304
backup_ceph_pool = backups
backup_ceph_stripe_unit = 0
backup_ceph_stripe_count = 0
restore_discard_excess_bytes = true
Restart the service and check the log for errors
root@computer01:/etc/ceph# systemctl restart cinder-backup
root@computer01:/etc/ceph# tail -f /var/log/cinder/cinder-backup.log
Run a backup test on the ceph01 volume
root@controller01:~#
root@controller01:~# openstack volume backup create --name ceph_backup ceph01
+-------+--------------------------------------+
| Field | Value |
+-------+--------------------------------------+
| id | 1c62ac89-fabb-477c-a1e7-3a4d03d02930 |
| name | ceph_backup |
+-------+--------------------------------------+
root@controller01:~#
root@node1:/etc/ceph# rbd ls backups
volume-8226863b-a2b7-4fb0-be73-16f5da340e1f.backup.1c62ac89-fabb-477c-a1e7-3a4d03d02930
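To exercise the restore path as well (my own extra step, not in the original run), a backup can be restored into an available volume; depending on the client version, restoring over an existing volume may require confirmation:
root@controller01:~# openstack volume backup restore ceph_backup ceph01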
Integrate Nova with Ceph
Edit the Nova configuration file on the compute nodes
vim /etc/nova/nova.conf
[DEFAULT]
live_migration_flag = "VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE" # live-migration flags
[libvirt]
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = 5dd9d4bd-fb51-4f71-ac8b-c59b41c59355 #virsh secret-list
http://krystism.is-programmer.com/posts/48105.html
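The Ceph documentation additionally recommends writeback caching for RBD-backed disks, which goes in the same [libvirt] section; an optional extra line:
disk_cachemodes = "network=writeback"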
Install the package and restart the service
apt install -y qemu-block-extra
systemctl restart nova-compute
Create an instance to test
root@controller01:~# openstack server create --flavor v1-1024-5G --image 114613ab-bcfc-488f-a850-357afe58ffdd --security-group 94f5795e-ef60-4a73-9d93-2322bb295b00 --nic net-id=c09830c2-efe9-4472-9994-1595fd46326b --key-name mykey1 vm2
+-------------------------------------+----------------------------------------------------+
| Field | Value |
+-------------------------------------+----------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | |
| OS-EXT-SRV-ATTR:host | None |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None |
| OS-EXT-SRV-ATTR:instance_name | |
| OS-EXT-STS:power_state | NOSTATE |
| OS-EXT-STS:task_state | scheduling |
| OS-EXT-STS:vm_state | building |
| OS-SRV-USG:launched_at | None |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | |
| adminPass | X2xiSeiEnsuQ |
| config_drive | |
| created | 2023-09-12T01:36:01Z |
| flavor | v1-1024-5G (b293efa0-1b84-432d-964f-b3b509d8c977) |
| hostId | |
| id | ccbfebd8-e667-4c7d-9ab7-25c6cdefa038 |
| image | cirros-ceph (114613ab-bcfc-488f-a850-357afe58ffdd) |
| key_name | mykey1 |
| name | vm2 |
| progress | 0 |
| project_id | 63a4d45b00424d4790068030c865a3ae |
| properties | |
| security_groups | name='94f5795e-ef60-4a73-9d93-2322bb295b00' |
| status | BUILD |
| updated | 2023-09-12T01:36:01Z |
| user_id | b120ba428f544306bacc7215743a0871 |
| volumes_attached | |
+-------------------------------------+----------------------------------------------------+
root@controller01:~# openstack server list
+--------------------------------------+------+--------+----------------------+-------------+------------+
| ID | Name | Status | Networks | Image | Flavor |
+--------------------------------------+------+--------+----------------------+-------------+------------+
| ccbfebd8-e667-4c7d-9ab7-25c6cdefa038 | vm2 | ACTIVE | Intnal=166.66.66.107 | cirros-ceph | v1-1024-5G |
+--------------------------------------+------+--------+----------------------+-------------+------------+
Verify:
root@node1:/etc/ceph# rbd ls vms
ccbfebd8-e667-4c7d-9ab7-25c6cdefa038_disk
Live migration configuration:
Configure libvirtd to listen on the compute nodes:
root@computer01:~# vi /etc/libvirt/libvirtd.conf
listen_tls = 0
listen_tcp = 1
tcp_port = "16509"
listen_addr = "192.168.41.20"
auth_tcp = "none"
root@computer02:~# vi /etc/libvirt/libvirtd.conf
listen_tls = 0
listen_tcp = 1
tcp_port = "16509"
listen_addr = "192.168.41.30"
auth_tcp = "none"
Enable listening
root@computer01:~# cat /etc/default/libvirtd
# Customizations for the libvirtd.service systemd unit
LIBVIRTD_ARGS="--listen"
Restart the service and check the log for errors
Compute nodes
systemctl mask libvirtd.socket libvirtd-ro.socket libvirtd-admin.socket libvirtd-tls.socket libvirtd-tcp.socket
service libvirtd restart
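A quick check (my own) that libvirtd is really listening on the TCP port configured above:
root@computer01:~# ss -tlnp | grep 16509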
Restart all Nova-related services on the controller node:
root@controller01:~#
service nova-api restart
service nova-scheduler restart
service nova-conductor restart
service nova-novncproxy restart
Test that libvirtd can be managed remotely:
root@computer01:~# virsh -c qemu+tcp://computer02.org/system
Welcome to virsh, the virtualization interactive terminal.
Type: 'help' for help with commands
'quit' to quit
virsh # exit
Test live migration: taking the test1 instance as the example, migrate it from computer02 to computer01. The workload is not affected.
root@controller01:~# openstack server list
+--------------------------------------+-------+---------+------------------------------------+-------------+------------+
| ID | Name | Status | Networks | Image | Flavor |
+--------------------------------------+-------+---------+------------------------------------+-------------+------------+
| fceafdc2-91ca-4e31-9927-f530694826e3 | test1 | ACTIVE | Intnal=166.66.66.159 | cirros-ceph | v1-1024-5G |
| ccbfebd8-e667-4c7d-9ab7-25c6cdefa038 | vm2 | SHUTOFF | Intnal=166.66.66.107, 192.168.5.25 | cirros-ceph | v1-1024-5G |
+--------------------------------------+-------+---------+------------------------------------+-------------+------------+
root@controller01:~# openstack server show fceafdc2-91ca-4e31-9927-f530694826e3
+-------------------------------------+----------------------------------------------------------+
| Field | Value |
+-------------------------------------+----------------------------------------------------------+
| OS-DCF:diskConfig | AUTO |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-SRV-ATTR:host | computer02.org |
| OS-EXT-SRV-ATTR:hypervisor_hostname | computer02.org |
| OS-EXT-SRV-ATTR:instance_name | instance-0000000a |
| OS-EXT-STS:power_state | Running |
| OS-EXT-STS:task_state | None |
| OS-EXT-STS:vm_state | active |
| OS-SRV-USG:launched_at | 2023-09-13T03:28:48.000000 |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | Intnal=166.66.66.159 |
| config_drive | |
| created | 2023-09-13T03:28:40Z |
| flavor | v1-1024-5G (b293efa0-1b84-432d-964f-b3b509d8c977) |
| hostId | 92b8979cae9d9150815ca621f992198a4c0094bef370f9452d46a21f |
| id | fceafdc2-91ca-4e31-9927-f530694826e3 |
| image | cirros-ceph (114613ab-bcfc-488f-a850-357afe58ffdd) |
| key_name | mykey1 |
| name | test1 |
| progress | 0 |
| project_id | 63a4d45b00424d4790068030c865a3ae |
| properties | |
| security_groups | name='default' |
| status | ACTIVE |
| updated | 2023-09-13T06:16:18Z |
| user_id | b120ba428f544306bacc7215743a0871 |
| volumes_attached | |
+-------------------------------------+----------------------------------------------------------+
root@controller01:~# nova live-migration fceafdc2-91ca-4e31-9927-f530694826e3 computer01.org
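To see that the workload really is unaffected, one simple check (assuming the Intnal network is reachable from where you test) is to keep a ping running against the instance while the migration is in flight; it should continue with at most a brief pause:
ping 166.66.66.159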
Check whether the migration succeeded:
root@controller01:~# openstack server show fceafdc2-91ca-4e31-9927-f530694826e3
+-------------------------------------+----------------------------------------------------------+
| Field | Value |
+-------------------------------------+----------------------------------------------------------+
| OS-DCF:diskConfig | AUTO |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-SRV-ATTR:host | computer01.org |
| OS-EXT-SRV-ATTR:hypervisor_hostname | computer01.org |
| OS-EXT-SRV-ATTR:instance_name | instance-0000000a |
| OS-EXT-STS:power_state | Running |
| OS-EXT-STS:task_state | None |
| OS-EXT-STS:vm_state | active |
| OS-SRV-USG:launched_at | 2023-09-13T03:28:48.000000 |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | Intnal=166.66.66.159 |
| config_drive | |
| created | 2023-09-13T03:28:40Z |
| flavor | v1-1024-5G (b293efa0-1b84-432d-964f-b3b509d8c977) |
| hostId | 4bae22d0241aac0e625076d5bf02daeb8bde4c82d84b8de691fb2f55 |
| id | fceafdc2-91ca-4e31-9927-f530694826e3 |
| image | cirros-ceph (114613ab-bcfc-488f-a850-357afe58ffdd) |
| key_name | mykey1 |
| name | test1 |
| progress | 0 |
| project_id | 63a4d45b00424d4790068030c865a3ae |
| properties | |
| security_groups | name='default' |
| status | ACTIVE |
| updated | 2023-09-14T01:47:00Z |
| user_id | b120ba428f544306bacc7215743a0871 |
| volumes_attached | |
+-------------------------------------+----------------------------------------------------------+
root@controller01:~#
This follows a tutorial I watched on Bilibili that is really well done. I am recording my lab run here as a memo to myself.
Credits:
https://www.cnblogs.com/wsxier/p/16744691.html
https://space.bilibili.com/91303567