Hi,
On 7/21/21 8:30 PM, Konstantin Shalygin wrote:
> Hi,
>
> On 21 Jul 2021, at 10:53, Burkhard Linke
> <Burkhard.Linke@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx> wrote:
>
>> One client with special needs is openstack cinder. The database
>> entries contain the mon list for volumes.
>
> Another question: do you know where this list is saved? I mean, how can
> I see the current records via the cinder command?
I'm not aware of any method to retrieve this information via the command
line (or even update it).

It is stored in the connection_info column of the volume_attachment table:
MariaDB [cinder]> select connection_info from volume_attachment limit 1;

{"attachment_id": "000e2b0e-66eb-4bc8-a8c0-f815c7bb628d", "encrypted": false,
 "driver_volume_type": "rbd", "secret_uuid": "XXXXXXXXXX", "qos_specs": null,
 "volume_id": "df8c89c0-bbf8-4694-bf40-492d5fc703d2", "auth_username": "cinder",
 "secret_type": "ceph",
 "name": "os-volumes/volume-df8c89c0-bbf8-4694-bf40-492d5fc703d2",
 "discard": true, "keyring": null, "cluster_name": "ceph", "auth_enabled": true,
 "hosts": ["192.168.15.4", "192.168.15.6", "192.168.15.7"],
 "access_mode": "rw", "ports": ["6789", "6789", "6789"]}
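There is no cinder command to update these records either; if they ever have
to change (as in the migration described further down), that means editing
the JSON directly in the database. A hypothetical sketch, with placeholder
replacement IP addresses (take a database backup first):

mysql cinder -e "
  UPDATE volume_attachment
     SET connection_info = REPLACE(REPLACE(REPLACE(connection_info,
           '192.168.15.4', '10.0.0.4'),
           '192.168.15.6', '10.0.0.6'),
           '192.168.15.7', '10.0.0.7')
   WHERE volume_id = 'df8c89c0-bbf8-4694-bf40-492d5fc703d2';"

Depending on the setup, fields like secret_uuid or cluster_name may need the
same treatment.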
The hosts and ports keys define the mon hosts to be used. This
information is translated into the following block in the libvirt domain
file:
    <driver name='qemu' type='raw' cache='none' discard='unmap'/>
    <auth username='cinder'>
      <secret type='ceph' uuid='XXXXXXXXXX'/>
    </auth>
    <source protocol='rbd' name='os-volumes/volume-550627a1-6b98-49ca-99e6-4289f81e6e97'>
      <host name='192.168.15.4' port='6789'/>
      <host name='192.168.15.6' port='6789'/>
      <host name='192.168.15.7' port='6789'/>
    </source>
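To double-check which mon hosts a running instance actually uses, the
generated domain XML can be inspected on the hypervisor, e.g. (the domain
name is a placeholder):

virsh dumpxml instance-0000002a | grep -A 5 "protocol='rbd'"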
We recently moved all our cinder volumes to a new cluster. The process was a
bit more complex than expected, since we were unable to use the built-in
migration in cinder (openstack too old, qemu too old). The steps were roughly
as follows (a command sketch follows the list):
- set up rbd mirroring to the new cluster
- wait until the initial mirror for one image is done
- freeze the instance
- wait until the pending mirroring data is transferred
- manipulate the database to point to the new cluster for that rbd
- failover the rbd image to the target cluster
- wait until the failover is done
- perform a live migration of the instance to another host (creates a
new libvirt configuration with the new mon hosts)
- disable mirroring for the rbd image
- move rbd image to trash on original cluster (in case it needs to be
restored)
- repeat with the next image
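As a rough illustration, the per-image sequence looks like the following
shell sketch. It is untested as written; pool, image, domain and cluster
names are placeholders, and it assumes journal-based mirroring with the
rbd-mirror daemons and pool peering already in place:

POOL=os-volumes
IMG=volume-df8c89c0-bbf8-4694-bf40-492d5fc703d2   # placeholder image name
DOM=instance-0000002a                             # placeholder libvirt domain

# enable mirroring for the image and wait for the initial sync
rbd --cluster old feature enable $POOL/$IMG journaling
rbd --cluster old mirror image enable $POOL/$IMG
rbd --cluster new mirror image status $POOL/$IMG   # repeat until "up+replaying"

# freeze the instance so no new writes arrive
virsh domfsfreeze $DOM   # or virsh suspend, depending on the guest agent

# wait until the pending mirroring data is transferred
rbd --cluster new mirror image status $POOL/$IMG   # until the replay has caught up

# point the cinder database at the new cluster for this volume
# (see the UPDATE example above)

# failover: demote the image on the old cluster, promote it on the new one
rbd --cluster old mirror image demote $POOL/$IMG
rbd --cluster new mirror image promote $POOL/$IMG

# live-migrate the instance to another hypervisor so libvirt regenerates the
# domain XML with the new mon hosts (exact nova/openstack CLI syntax depends
# on the release), then thaw/resume the guest

# clean up: stop mirroring and park the old image in the trash
rbd --cluster new mirror image disable $POOL/$IMG
rbd --cluster old trash move $POOL/$IMG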
Apart from some test cases and some instances with a more complex setup
(snapshots, multiple volumes per instance, very active or large volumes,
attached GPUs), we did not encounter any problems. But I would prefer not to
do this again; the process is somewhat fragile, and automatic rollback is
hard to implement.
Regards,
Burkhard