OSD crash during removal of degraded pool

Hello,

My dev pool was broken by a power outage, and I decided to remove it
since the recovery procedures have not succeeded so far.

Here is the health detail; immediately after the rmpool execution, osd.8
crashed with the attached backtrace
(f26f7a39021dbf440c28d6375222e21c94fe8e5c).
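
For reference, the removal was issued roughly as follows; the pool name
"dev" below is only a placeholder, not the actual name of the pool:

    rados rmpool dev dev --yes-i-really-really-mean-it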

HEALTH_ERR 1 pgs inconsistent; 3 pgs recovering; 3 pgs stuck unclean; recovery -2/96876 degraded (-0.002%); 3/48438 unfound (0.006%); 1 scrub errors
pg 5.0 is stuck unclean for 77536.919759, current state active+recovering, last acting [4,7]
pg 5.fa6 is stuck unclean for 77548.218768, current state active+recovering, last acting [8,4]
pg 5.4fb is stuck unclean for 77527.477216, current state active+recovering, last acting [8,7]
pg 5.fa6 is active+recovering, acting [8,4], 1 unfound
pg 5.4fb is active+recovering, acting [8,7], 1 unfound
pg 5.168 is active+clean+inconsistent, acting [4,6]
pg 5.0 is active+recovering, acting [4,7], 1 unfound
recovery -2/96876 degraded (-0.002%); 3/48438 unfound (0.006%)
1 scrub errors
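
In case it is useful, the unfound objects reported above can be listed per
pg with commands like the following (pg ids taken from the health detail):

    ceph pg 5.0 list_unfound
    ceph pg 5.fa6 list_unfound
    ceph pg 5.4fb list_unfound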

Attachment: osd.crash.txt.gz
Description: GNU Zip compressed data

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
