Hi Adrien,
I second Eugen's and Tyler's answers.
Specifically for Manila, I raised a bug some time ago
(https://bugs.launchpad.net/manila/+bug/1996793); the fix seems to have
landed in 19.0.0.0rc1.
For previous releases, it is possible to update export_locations by
restarting the manila service (not ideal, though...)
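In case it helps, a minimal sketch of the workaround, assuming a
systemd-based deployment; the direct-SQL alternative assumes the
share_instance_export_locations table from a recent Manila schema, so
verify the names against your release and back up the database first:

    # Option A: restart manila-share so the export locations get
    # refreshed (unit name varies by distribution)
    systemctl restart openstack-manila-share

    # Option B (hypothetical, untested): rewrite the stored path
    # directly; table/column names assumed from a recent Manila
    # schema -- verify and back up first
    mysql manila -e "UPDATE share_instance_export_locations
        SET path = REPLACE(path, '<old-mon-ip>', '<new-mon-ip>')
        WHERE path LIKE '%<old-mon-ip>%';"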
Cheers,
Enrico
On 12/2/24 17:07, Adrien Georget wrote:
Thanks for pointing this out!
I thought it would be easier to manage on the OpenStack side...
I will discuss with the cloud team to find the best way to handle it.
Maybe a live migration of all VMs in order to refresh the MON IPs, and
then some database updates.
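If it comes to that, a rough sketch of the migration loop (untested;
flag names vary between openstackclient versions, so check
`openstack server migrate --help` first):

    # Live-migrate every ACTIVE VM once so its connection info gets
    # rewritten on the target host (requires patches such as the ones
    # Tyler mentions below)
    for vm in $(openstack server list --all-projects --status ACTIVE -f value -c ID); do
        openstack server migrate --live-migration "$vm"
    done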
Cheers,
Adrien
On 29/11/2024 at 13:51, Eugen Block wrote:
Confirming Tyler's description: we had to do a lot of database
manipulation in order to get the new IPs into the connection
parameters. Since you already added the new monitors, there's not much
else you can do. But I would have suggested reinstalling the MONs
rather than adding new ones, as Tyler already stated.
On 29.11.24 at 13:19, Tyler Stachecki wrote:
On Fri, Nov 29, 2024, 5:33 AM Adrien Georget <adrien.georget@xxxxxxxxxxx> wrote:
Hello,
We are using Ceph as a storage backend for OpenStack (Cinder, Nova,
Glance, Manila), and we are replacing the old hardware hosting the Ceph
monitor daemons (MON, MGR, MDS) with new machines.
I have already added the new ones in production; the monitors
successfully joined the quorum, and the new MGR/MDS daemons are on standby.
For the monitors, I'm sure the monmap is already up to date and the
OpenStack clients are already aware of the change, so it should not be
a problem when I next shut down the old monitors.
The ceph.conf on all OpenStack controllers will be updated to replace
"mon host" with the new addresses before the old mons are shut down.
But I have some doubts about the resilience of the OpenStack Manila
service, because the monitor IP addresses look hardcoded in the export
location of the Manila share.
The manila show command returns, for example:

| export_locations | path = 134.158.208.140:6789,134.158.208.141:6789,134.158.208.142:6789:/volumes/EC_manila/_nogroup/7a6c05d9-2fea-43b1-a6d4-06eec1e384f2 |
|                  | share_instance_id = 7a6c05d9-2fea-43b1-a6d4-06eec1e384f2 |
Has anyone done this kind of migration in the past who can confirm my
doubts? Is there any process to update the shares?
Cheers,
Adrien
I can't speak for Manila, but for Cinder/Glance/Nova this is a bit of a
headache. Unfortunately, the mon IPs get hardcoded there as well, both
in the database and in the libvirt XML. Go to any nova-compute node
with a Ceph-backed Cinder volume (or Nova image, including config
drives) attached to it, run `virsh dumpxml <UUID>`, and you'll see it.
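For illustration, the RBD source in the domain XML looks roughly like
this (trimmed excerpt with placeholder UUID; note the mon addresses
baked into the <host> elements):

    virsh dumpxml <UUID> | grep -B2 -A5 "protocol='rbd'"
    # trimmed example output:
    #   <source protocol='rbd' name='volumes/volume-<UUID>'>
    #     <host name='134.158.208.140' port='6789'/>
    #     <host name='134.158.208.141' port='6789'/>
    #     <host name='134.158.208.142' port='6789'/>
    #   </source>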
Unfortunately, changing all of the mon IPs will result in a situation
where you can neither live-migrate your VMs nor start/hard-reboot them
until their volumes are detached and re-attached with the new monitor IPs.
The only way we found around this with zero downtime was to rebuild
_some_ of the ceph-mons with new IPs, and then leverage some custom
patches (which I can share) that rewrite the libvirt and database info
during a live-migration with the new set of intended mon IPs (not the
ones currently in ceph.conf). So, in essence, we had to live-migrate
each VM once to pull this off.
If you don't require live-migration or don't use it, you can probably
get away with just doing some database updates (carefully!). The VMs do
observe monmap changes at runtime like any other RADOS client; it's
only when you try to perform control-plane actions against them that it
becomes a problem, because in that case the old mon IPs from the
database are used rather than the ones in ceph.conf.
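To make the "database updates" part concrete, a hypothetical, untested
sketch against the Nova database; table and column names are
assumptions from a recent Nova schema, so verify them against your
version and take a backup first:

    # Rewrite one old mon IP inside the stored connection_info JSON for
    # non-deleted attachments; repeat per old address, and audit any
    # other tables that embed connection info before trusting the result
    mysql nova -e "UPDATE block_device_mapping
        SET connection_info = REPLACE(connection_info, '134.158.208.140', '<new-mon-ip>')
        WHERE connection_info LIKE '%134.158.208.140%' AND deleted = 0;"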
Thanks,
Tyler
--
Enrico Bocchi
CERN European Laboratory for Particle Physics
IT - Storage & Data Management - General Storage Services
Mailbox: G20500 - Office: 31-2-010
1211 Genève 23
Switzerland
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx