Re: Is it safe to set multiple OSD out across multiple failure domain?

Den ons 12 mars 2025 kl 17:12 skrev Alexander Patrakov <patrakov@xxxxxxxxx>:
> > >
> > > I need to take 3 of them, 0, 10 and 30, out, is it safe to run out on all 3
> > > OSDs at the same time with "ceph osd out 0 10 20" or do I need to take one
> > > after the other out?
> >
> > It is not safe. [...]
> > What you can do is lower the weight of them to 0.0 and ceph will start
> > move PGs off them into the others, and when they have 0 PGs, you can
> > purge them and stop the OSD process and remove the disk.
>
> Please note that the original post talks about not yanking OSD drives
> physically, but about running the "ceph osd out" command. This command
> is exactly equivalent to reweighing the OSD to zero.

Yes, my bad. I misread "ceph osd out" as physically taking the drives
out. You are right: telling Ceph via "ceph osd out" or reweighting the
OSD to zero is the same thing, and both work to make the cluster start
emptying those OSDs.
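For reference, the workflow discussed above as a minimal command sketch. The OSD IDs are the ones from the thread; the "safe-to-destroy" check is my addition (available since Luminous), not something the original posters mentioned:

```shell
# Mark the OSDs out -- equivalent to reweighting each of them to 0;
# Ceph starts migrating their PGs onto the remaining OSDs.
ceph osd out 0 10 20

# Watch the rebalance; the OSDs are drained once they hold 0 PGs.
ceph -s
ceph osd df tree

# Optional sanity check before removing the disks: confirms the OSDs
# can be destroyed without reducing data availability or durability.
ceph osd safe-to-destroy 0 10 20
```

Only once the drain is complete (and, ideally, safe-to-destroy agrees) would you stop the OSD processes, purge the OSDs, and pull the disks.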


-- 
May the most significant bit of your life be positive.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
