Note that there's a similar field in the nova database (connection_info):
---snip---
MariaDB [nova]> select connection_info from block_device_mapping where
instance_uuid='bbc33a1d-10c0-47b1-8179-304899c4546c';
+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| connection_info
|
+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| NULL
|
| {"driver_volume_type": "rbd", "data": {"name":
"volumes/volume-d5d5d059-40be-4904-b75c-12945dedeb99", "hosts":
["IP1", "IP2", "IP3"], "ports": ["6789", "6789", "6789"],
"cluster_name": "ceph", "auth_enabled": true, "auth_username":
"openstack-ec", "secret_type": "ceph", "secret_uuid": "<SECRET>",
"volume_id": "d5d5d059-40be-4904-b75c-12945dedeb99", "discard": true,
"keyring": null, "qos_specs": null, "access_mode": "rw", "encrypted":
false}, "status": "attaching", "instance":
"bbc33a1d-10c0-47b1-8179-304899c4546c", "attached_at":
"2021-07-15T10:59:12.000000", "detached_at": "", "volume_id":
"d5d5d059-40be-4904-b75c-12945dedeb99", "serial":
"d5d5d059-40be-4904-b75c-12945dedeb99"} |
[...]
---snip---
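So if the MON IPs change while volumes are attached, this field would presumably need the same treatment as the one in cinder. Something like the following (untested!) sketch could do it; OLD_IP1/NEW_IP1 are placeholders, back up the table first and run one REPLACE per changed address:
---snip---
MariaDB [nova]> update block_device_mapping
    -> set connection_info = replace(connection_info, '"OLD_IP1"', '"NEW_IP1"')
    -> where connection_info is not null;
---snip---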
We had a similar migration a few years back, but I can't quite recall
whether we only had to update the cinder DB or both cinder and nova. I'll
check if I still have some notes lying around.
Regarding the IP change: simply changing the ceph.conf on the MON
nodes won't be enough, because the monmap will still contain the old
IPs. I tried something like that last year in a lab environment with a
cephadm-deployed cluster [1], but I'm not sure whether that approach
still works, and I haven't done it with cephadm on production hardware
yet. The last time I changed MON IPs on a hardware cluster was last
year, on a medium-sized cluster running Nautilus; in that case we had
to recreate the monmap with the correct IPs and inject it into the
MONs. In the end it worked fine, but as I said, this works differently
with cephadm. The docs [2] still describe the process for
non-containerized environments.
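For reference, that procedure boils down to something like this (just a sketch; the mon ID, IP and port are placeholders, and all MONs have to be stopped before injecting the new map):
---snip---
# dump and inspect the current monmap
ceph mon getmap -o /tmp/monmap
monmaptool --print /tmp/monmap

# remove the old entry and re-add it with the new IP (repeat per MON)
monmaptool --rm mon1 /tmp/monmap
monmaptool --add mon1 192.168.100.11:6789 /tmp/monmap

# with the MON daemons stopped, inject the modified map on each MON
ceph-mon -i mon1 --inject-monmap /tmp/monmap
---snip---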
Regards,
Eugen
[1]
https://heiterbiswolkig.blogs.nde.ag/2020/12/18/cephadm-changing-a-monitors-ip-address/
[2]
https://docs.ceph.com/en/latest/rados/operations/add-or-rm-mons/#changing-a-monitor-s-ip-address
Quoting Burkhard Linke <Burkhard.Linke@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx>:
Hi,
On 7/21/21 8:30 PM, Konstantin Shalygin wrote:
Hi,
On 21 Jul 2021, at 10:53, Burkhard Linke
<Burkhard.Linke@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx> wrote:
One client with special needs is openstack cinder. The database
entries contain the mon list for volumes.
Another question: do you know where this list is saved? I mean, how
can I see the current records via a cinder command?
I'm not aware of any method to retrieve this information via the
command line (or even update it).
It is stored in the connection_info column in the volume_attachment table:
MariaDB [cinder]> select connection_info from volume_attachment limit 1;
+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| connection_info |
+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| {"attachment_id": "000e2b0e-66eb-4bc8-a8c0-f815c7bb628d",
"encrypted": false, "driver_volume_type": "rbd", "secret_uuid":
"XXXXXXXXXX", "qos_specs": null, "volume_id":
"df8c89c0-bbf8-4694-bf40-492d5fc703d2", "auth_username": "cinder",
"secret_type": "ceph", "name":
"os-volumes/volume-df8c89c0-bbf8-4694-bf40-492d5fc703d2", "discard":
true, "keyring": null, "cluster_name": "ceph", "auth_enabled": true,
"hosts": ["192.168.15.4", "192.168.15.6", "192.168.15.7"],
"access_mode": "rw", "ports": ["6789", "6789", "6789"]} |
+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
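If the MariaDB version is recent enough to provide the JSON functions (10.2 or later, if I remember correctly), the recorded mon addresses can also be listed more selectively, e.g.:
MariaDB [cinder]> select volume_id, json_extract(connection_info, '$.hosts')
    -> from volume_attachment where connection_info is not null limit 5;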
The hosts and ports keys define the mon hosts to be used. This
information is translated into the following block in the libvirt
domain file:
  <driver name='qemu' type='raw' cache='none' discard='unmap'/>
  <auth username='cinder'>
    <secret type='ceph' uuid='XXXXXXXXXX'/>
  </auth>
  <source protocol='rbd' name='os-volumes/volume-550627a1-6b98-49ca-99e6-4289f81e6e97'>
    <host name='192.168.15.4' port='6789'/>
    <host name='192.168.15.6' port='6789'/>
    <host name='192.168.15.7' port='6789'/>
  </source>
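To verify what a running instance actually uses, the live definition can be checked directly on the compute node (the instance name below is just an example):
virsh dumpxml instance-00000123 | grep -A 5 "protocol='rbd'"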
We recently moved all our cinder volumes to a new cluster. The
process was a little bit more complex than expected, since we were
unable to use the built-in migration in cinder (openstack too old,
qemu too old). Per image, the procedure was roughly as follows (see
the command sketch after the list):
- set up rbd mirroring to the new cluster
- wait until the initial mirror for one image is done
- freeze the instance
- wait until the pending mirroring data is transferred
- manipulate the database to point to the new cluster for that rbd
- fail over the rbd image to the target cluster
- wait until the failover is done
- perform a live migration of the instance to another host (creates
a new libvirt configuration with the new mon hosts)
- disable mirroring for the rbd image
- move rbd image to trash on original cluster (in case it needs to
be restored)
- repeat with the next image
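Very roughly, the per-image commands looked like the following sketch (journal-based mirroring assumed; the mirror peer setup, the freeze and the live migration itself are omitted, and the cluster name "new", OLD_IP/NEW_IP and the database credentials are placeholders to adapt):

# example volume UUID (the one from the attachment shown above)
VOL=df8c89c0-bbf8-4694-bf40-492d5fc703d2

# enable mirroring for the image on the source cluster and wait for the sync
rbd mirror image enable os-volumes/volume-$VOL
rbd mirror image status os-volumes/volume-$VOL

# freeze the instance, wait for pending mirror data, then repoint the
# attachment to the new mon addresses in the cinder DB
mysql cinder -e "update volume_attachment set connection_info = replace(connection_info, '\"OLD_IP\"', '\"NEW_IP\"') where volume_id = '$VOL';"

# fail over: demote on the source, promote on the target cluster
rbd mirror image demote os-volumes/volume-$VOL
rbd --cluster new mirror image promote os-volumes/volume-$VOL

# after the live migration: disable mirroring and park the old image
rbd mirror image disable os-volumes/volume-$VOL
rbd trash mv os-volumes/volume-$VOL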
Except for some test cases and some instances with a more complex
setup (snapshots, multiple volumes per instance, very active or large
volumes, attached GPUs), we did not encounter any problems. But I
would prefer not to do this again; the process is somewhat fragile,
and automatic rollback is hard to implement.
Regards,
Burkhard
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx