Re: Can't start osd - one osd is always down.

My experience is that once you hit this bug, those PGs are gone.  I tried marking the primary OSD out, which only moved the problem to the new primary OSD.  Luckily for me, the data in the affected PGs was also replicated to a secondary cluster.  I ended up deleting the whole pool and recreating it.
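The rough sequence looks something like this (the pool name and PG counts are placeholders, substitute your own values):

    ceph osd out 21                                                   # remap the failing primary's PGs to other OSDs
    ceph osd pool delete <poolname> <poolname> --yes-i-really-really-mean-it
    ceph osd pool create <poolname> <pg_num> <pgp_num>

The delete/recreate step destroys everything in the pool, so only go that route if you can restore or regenerate its contents from somewhere else.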

Which pools are 7 and 23?  It's possible they hold something that's easy to replace.
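If you're not sure which names those IDs map to, something like this should list them (from memory, double-check against your release):

    ceph osd lspools                 # lists pool IDs and names
    ceph osd dump | grep pool        # shows size, pg_num, etc. for each pool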



On Fri, Oct 24, 2014 at 9:26 PM, Ta Ba Tuan <tuantb@xxxxxxxxxx> wrote:
Hi Craig, Thanks for replying.
When I started that OSD, the "ceph -w" log warned that PGs 7.9d8, 23.596, 23.9c6, and 23.63 can't recover, as in the pasted log.
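For more detail on why they won't recover, querying them directly should show the peering and recovery state, something like:

    ceph health detail               # lists which PGs are degraded and why
    ceph pg dump_stuck unclean       # PGs stuck in a not-clean state
    ceph pg 7.9d8 query              # full peering/recovery info for one PG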

Those PGs are in the "active+degraded" state.
#ceph pg map 7.9d8
osdmap e102808 pg 7.9d8 (7.9d8) -> up [93,49] acting [93,49]
(When I start osd.21, pg 7.9d8 and the three remaining PGs change to the "active+recovering" state.)  osd.21 still goes down, with the following logs:
