Re: PGs stuck unclean "active+remapped" after an osd marked out

Hi,

I kept running into the same situation: I couldn't remove an OSD without
leaving some PGs permanently stuck in the "active+remapped" state.
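
(In case it is useful, this is how I was looking at the stuck PGs; these
are just the usual status commands, so the exact output will vary with
your release:)

    ceph health detail            # lists the PGs stuck unclean
    ceph pg dump_stuck unclean    # shows their ids and acting OSDs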

But I remembered reading on IRC that, before marking an OSD out, it can
sometimes be a good idea to reweight it to 0 first. So, instead of
doing [1]:

    ceph osd out 3

I have tried [2]:

    ceph osd crush reweight osd.3 0   # then wait for the rebalancing to finish...
    ceph osd out 3

and it worked. Then I could remove my OSD by following the online documentation:
http://ceph.com/docs/master/rados/operations/add-or-rm-osds/#removing-osds-manual
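
For the record, the whole sequence looked roughly like this (a sketch
rather than a copy of my shell history; how you stop the daemon depends
on your distro/init system, and osd.3 is just my OSD):

    ceph osd crush reweight osd.3 0    # take it out of CRUSH placement first
    ceph -w                            # watch until the backfilling is done
    ceph osd out 3
    # then the removal steps from the page above:
    service ceph stop osd.3            # or: stop ceph-osd id=3 (upstart)
    ceph osd crush remove osd.3
    ceph auth del osd.3
    ceph osd rm 3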

Now the OSD is removed and my cluster is HEALTH_OK. \o/

Now, my question is: why did my cluster get permanently stuck in
"active+remapped" with [1] but not with [2]? Personally, I have absolutely
no explanation. If you have one, I'd love to hear it.

Should the "crush reweight" step be mentioned in the online documentation?
http://ceph.com/docs/master/rados/operations/add-or-rm-osds/#removing-osds-manual
If so, I'd be happy to open a pull request against the docs. ;)

Regards.

-- 
François Lafont
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




