Re: Is it safe to set multiple OSDs out across multiple failure domains?

> >> > > I need to take 3 of them, 0, 10 and 30, out. Is it safe to run "out" on all 3
> >> > > OSDs at the same time with "ceph osd out 0 10 30", or do I need to take them
> >> > > out one after the other?
> >> >
> >> > It is not safe. [...]
> >> > What you can do is lower their weight to 0.0; ceph will then start
> >> > moving PGs off them onto the other OSDs, and once they hold 0 PGs you
> >> > can stop the OSD processes, purge them and remove the disks.
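
For reference, a rough sketch of that drain-then-remove sequence (assuming
OSD IDs 0, 10 and 30 from the original question, and a systemd-managed,
non-containerized deployment; the exact commands may differ in your setup):

  # Drop the CRUSH weight to 0.0 so ceph starts backfilling the PGs
  # from these OSDs onto the remaining ones.
  ceph osd crush reweight osd.0 0.0
  ceph osd crush reweight osd.10 0.0
  ceph osd crush reweight osd.30 0.0

  # Watch the PGS column for those OSDs drop to 0 while rebalancing runs.
  ceph osd df

  # Once they hold no PGs: stop each daemon on its host, then purge.
  systemctl stop ceph-osd@0        # run on the host carrying osd.0, etc.
  ceph osd purge 0 --yes-i-really-mean-it
  ceph osd purge 10 --yes-i-really-mean-it
  ceph osd purge 30 --yes-i-really-mean-it
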
> >>
> >> Please note that the original post is not about yanking OSD drives out
> >> physically, but about running the "ceph osd out" command. This command
> >> is exactly equivalent to reweighting the OSD to zero.
> >
> >Yes, my bad. I misread "ceph osd out" as "taking out". You are
> >right, telling ceph via "ceph osd out" or reweighting it to zero is
> >the same, and both work to make the cluster start emptying those
> >OSDs.
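
Put concretely, either of these starts the same data movement (again
assuming OSD IDs 0, 10 and 30; both are reversible, e.g. with "ceph osd in"
or by restoring the previous weight, as long as the OSDs are still present):

  # Mark the OSDs out; ceph begins remapping their PGs elsewhere.
  ceph osd out 0 10 30

  # Equivalent: set the override reweight of each OSD to 0.
  ceph osd reweight 0 0.0
  ceph osd reweight 10 0.0
  ceph osd reweight 30 0.0
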
>
> The reason I asked is that several months back I got an off-list reply from
> a frequent poster on this list saying that setting 3 OSDs out at the same
> time could leave me with incomplete PGs.
>
> But at least now I have 2 saying it's OK and 1 saying it's not, so thank you
> Alexander and Janne.

Perhaps this person mistook your question the same way I did at first,
since "removing", "destroying" or "physically taking out" would all be
bad, whereas telling the cluster to stop using them at its earliest
convenience and waiting a bit (sometimes days) for it to finish is
totally fine. It does depend on how you phrase the "action" verbs, I
guess. I wanted to make sure it was understood that, given a large set
of PGs, one or more of them will sit exactly on those 3 OSDs, so those
PGs need to move away in an orderly fashion; otherwise you will end up
with a broken pool and clients that sooner or later will have stuck IO.
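
In practice the cluster can tell you when that orderly move has finished,
before you stop or remove anything. A minimal check, assuming the same
three OSD IDs (and "safe-to-destroy" being available, i.e. Luminous or
later):

  # All PGs active+clean, no degraded or misplaced objects?
  ceph -s

  # No PGs left on the drained OSDs (their PGS column should read 0)?
  ceph osd df

  # Explicit confirmation that removing these OSDs cannot lose data.
  ceph osd safe-to-destroy 0 10 30
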

-- 
May the most significant bit of your life be positive.