Hi,
what are your rulesets for the affected pools? As far as I remember,
the orchestrator upgrades OSDs one at a time, not multiple at once.
It checks with the "ok-to-stop" command whether the upgrade of that
daemon can proceed, so as long as you have host as the failure domain
there should be no I/O disruption for clients. Maybe you have some
pools with size = 2 and min_size = 2?
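For example, something like this should show whether any of the
affected pools run with size 2 / min_size 2 and which failure domain
their CRUSH rule uses (standard ceph CLI; the rule name and OSD id
below are just placeholders, adjust them to your cluster):

  # replication settings (size/min_size) and crush rule per pool
  ceph osd pool ls detail

  # failure domain ("type" in the chooseleaf step) of a given rule,
  # e.g. replicated_rule
  ceph osd crush rule dump replicated_rule

  # what the orchestrator itself checks before stopping an OSD,
  # e.g. osd.12
  ceph osd ok-to-stop 12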
Regards,
Eugen
Quoting Zakhar Kirpichenko <zakhar@xxxxxxxxx>:
Hi!
Sometimes when we upgrade our cephadm-managed 16.2.x cluster, cephadm
decides that it's safe to upgrade several OSDs at a time, and as a
result RBD-backed OpenStack VMs sometimes experience I/O stalls and
read-only filesystems. Is there a way to make cephadm upgrade fewer
OSDs at a time, or perhaps upgrade them one by one? I don't mind if
that takes a lot more time, as long as there's no I/O interruption.
I would appreciate any advice.
Best regards,
Zakhar
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx