Re: Unexpected "out" OSD behaviour

hi!

I've also noticed that behaviour and submitted a patch some time ago that should fix (2):
https://github.com/ceph/ceph/pull/27288

But it may well be that there are more cases where PGs are not discovered on devices that do have them. Just recently, a
lot of my data was marked degraded and then recreated, even though it would have been available on a node that had taken
very long to reboot.

As a workaround, you can also mark your OSD in and then out again right away; the data is discovered then, as in the
sketch below. With my patch, that shouldn't be necessary any more. Hope this helps.
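
A minimal sketch of that workaround (osd id 7 is just a placeholder):

   # marking the OSD "in" makes the PGs it holds known again; marking it
   # "out" right away then lets them drain from it as a source instead of
   # going degraded
   ceph osd in 7
   ceph osd out 7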

Cheers
  -- Jonas


On 22/12/2019 19.48, Oliver Freyermuth wrote:
> Dear Cephers,
> 
> I realized the following behaviour only recently:
> 
> 1. Marking an OSD "out" sets its weight to zero and allows the data to be migrated away (as long as it is up),
>    i.e. it is still considered a "source" and nothing enters a degraded state (so far, everything as expected).
> 2. Restarting an "out" OSD, however, means it comes back with "0 pgs", and if the data was not fully migrated away yet,
>    the PGs that were still kept on it will enter a degraded state since they now lack a copy / shard.
> 
> Is (2) expected? 
> 
> If so, my understanding that taking an OSD "out" lets the data be migrated away without losing any redundancy is wrong,
> since redundancy will be lost as soon as the "out" OSD is restarted (e.g. due to a crash, node reboot, ...), and the only safe procedure would be:
> 1. Disable the automatic balancer. 
> 2. Either adjust the weights of the OSDs to drain to zero, or use pg upmap to drain them. 
> 3. Re-enable the automatic balancer only after having fully drained those OSDs and performed the necessary intervention
>    (in our case, recreating the OSDs with a faster blockdb).
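
For illustration, a rough CLI sketch of that drain procedure (osd.7, pg 1.2f and target osd 11 are placeholder ids; in
practice, upmap entries would usually be generated with "osdmaptool --upmap" rather than written by hand):

   # 1. disable the automatic balancer
   ceph balancer off

   # 2. drain by setting the CRUSH weight of the OSD to zero ...
   ceph osd crush reweight osd.7 0
   #    ... or move individual PGs off the OSD via upmap
   ceph osd pg-upmap-items 1.2f 7 11

   # 3. re-enable the balancer only once the OSDs are empty and the
   #    intervention (e.g. recreating them with a faster blockdb) is done
   ceph balancer on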
> 
> Is this correct? 
> 
> Cheers,
> 	Oliver
> 

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


