Re: Is it safe to set multiple OSD out across multiple failure domain?

On Wed, 12 Mar 2025 at 11:41, Kai Stian Olstad <ceph+list@xxxxxxxxxx> wrote:
>
> Say we have 10 hosts with 10 OSDs each and the failure domain is host.
>
> host0 osd 0 to 9
> host1 osd 10 to 19
> host2 osd 20 to 29
> host3 osd 30 to 39
> host4 osd 40 to 49
> host5 osd 50 to 59
> host6 osd 60 to 69
> host7 osd 70 to 79
> host8 osd 80 to 89
> host9 osd 90 to 99
>
> A pool has EC 4+2 and PG 2.1 has UP and ACTING on the following OSDs:
> 0, 10, 20, 30, 40 and 50.
>
> I need to take 3 of them, 0, 10 and 30, out. Is it safe to run "out" on all
> 3 OSDs at the same time with "ceph osd out 0 10 30", or do I need to take
> them out one after the other?

It is not safe. You have no control over where ALL the PGs place their
pieces, so if you take the 3 OSDs out at once, there may well be some PG
for which osd 0, 10 and 30 each hold a piece. EC can't put that 4+2 PG
back together: it only has 3 pieces left, and it needs at least 4 in order
to rebuild the data. Counted the other way around, EC X+2 means at most 2
drives can be lost before you lose data. A PG that has lost two pieces
will also stop serving I/O (it is below the default min_size) while the
missing pieces are rebuilt, and serve I/O again once at least one piece
has been recreated, but in terms of "can I remove 3 drives on EC 4+2" the
answer is no.
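
If you want to check the overlap first, Ceph can tell you. Something like
this should do it (untested sketch, OSD ids taken from your example;
ok-to-stop strictly answers "can these be stopped at the same time", but
it trips on the same min_size problem):

  # list the PGs that keep a piece on each of the three OSDs
  ceph pg ls-by-osd 0
  ceph pg ls-by-osd 10
  ceph pg ls-by-osd 30

  # ask directly whether losing all three at once would push any PG
  # below min_size
  ceph osd ok-to-stop 0 10 30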

What you can do is lower their CRUSH weight to 0.0; Ceph will then start
moving PGs off them onto the other OSDs, and once they hold 0 PGs you can
stop the OSD processes, purge them and remove the disks.
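
A rough sketch of that, assuming the OSD ids from your example and a
package-based install (cephadm uses different systemd unit names), so
double-check against your release:

  # drain the OSDs by dropping their CRUSH weight to zero
  ceph osd crush reweight osd.0 0.0
  ceph osd crush reweight osd.10 0.0
  ceph osd crush reweight osd.30 0.0

  # wait for backfill to finish, then confirm they hold no data
  ceph pg ls-by-osd 0
  ceph osd safe-to-destroy 0 10 30

  # only then stop and remove them (repeat per OSD)
  systemctl stop ceph-osd@0
  ceph osd purge 0 --yes-i-really-mean-it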

But you must let Ceph move the data off the OSDs first.

-- 
May the most significant bit of your life be positive.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


