Hello Kai,

"ceph osd out" is always safe, as the outed OSD continues to serve PGs
that have not yet been migrated away from it.

Before stopping osd.123, check that it indeed holds zero PGs:

    ceph osd safe-to-destroy 123

On Wed, Mar 12, 2025 at 6:41 PM Kai Stian Olstad <ceph+list@xxxxxxxxxx> wrote:
>
> Say we have 10 hosts with 10 OSDs each, and the failure domain is host.
>
> host0 osd 0 to 9
> host1 osd 10 to 19
> host2 osd 20 to 29
> host3 osd 30 to 39
> host4 osd 40 to 49
> host5 osd 50 to 59
> host6 osd 60 to 69
> host7 osd 70 to 79
> host8 osd 80 to 89
> host9 osd 90 to 99
>
> A pool has EC 4+2, and PG 2.1 has UP and ACTING on the following OSDs:
> 0, 10, 20, 30, 40 and 50.
>
> I need to take 3 of them, 0, 10 and 30, out. Is it safe to mark all 3
> OSDs out at the same time with "ceph osd out 0 10 30", or do I need to
> take them out one after the other?
>
> I would think and hope that Ceph does the right thing and the PG remains
> accessible, since all OSDs are still available and data is being copied
> to other OSDs.
>
>
> --
> Kai Stian Olstad

--
Alexander Patrakov
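
P.S. In case it helps, here is a minimal sketch of the whole drain-and-stop
sequence, using the OSD IDs from your example (0, 10 and 30) and assuming a
systemd-managed (non-cephadm) deployment; with cephadm the stop step would be
"ceph orch daemon stop osd.<id>" instead. Treat it as an illustration, not a
verified procedure for your cluster:

    # Mark all three OSDs out in one go; they stay up and keep serving
    # their PGs while backfill moves the data to other OSDs.
    ceph osd out 0 10 30

    # Watch recovery/backfill progress.
    ceph -s
    ceph pg stat

    # Wait until no PG depends on these OSDs any more.
    while ! ceph osd safe-to-destroy 0 10 30; do sleep 60; done

    # Only now stop the daemons, each on its own host:
    systemctl stop ceph-osd@0    # on host0
    systemctl stop ceph-osd@10   # on host1
    systemctl stop ceph-osd@30   # on host3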