How to fix 1 pg stale+active+clean of cephfs pool

The cause of the stale pg is the fs_data.r1 pool, which has only 1 replica. 
It should be empty, but ceph df shows 128 KiB used.

I have already marked the osd as lost and removed it from the crush map.
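
Roughly the commands I used for that (from memory, so treat this as a 
sketch rather than the exact history):

ceph osd lost 31 --yes-i-really-mean-it    # mark osd.31 as permanently lost
ceph osd crush remove osd.31               # drop it from the crush map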


PG_AVAILABILITY Reduced data availability: 1 pg stale
    pg 30.4 is stuck stale for 407878.113092, current state stale+active+clean, last acting [31]

[@c01 ~]# ceph pg map 30.4
osdmap e72814 pg 30.4 (30.4) -> up [29] acting [29]

[@c01 ~]# ceph pg 30.4 query
Error ENOENT: i don't have pgid 30.4
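
Since the data in that pg is presumably gone (it was a 1 replica pool 
anyway), I assume the way forward is to recreate the pg as empty, with 
something like the following (older releases take the command without the 
flag, if I remember correctly):

ceph osd force-create-pg 30.4 --yes-i-really-mean-it   # recreate pg 30.4 as an empty pg

Is that the right approach here, or is there a cleaner way to get rid of 
the stale pg?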



-----Original Message-----
To: ceph-users
Subject:  Re: How to fix 1 pg stale+active+clean

 
I had just one osd go down (31); why is ceph not auto-healing in this 
'simple' case?
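
For reference, the replica count of the pool behind pg 30 can be checked 
with something like this (pool name is from my setup):

ceph osd pool get fs_data.r1 size    # size 1 means there is no second copy to recover from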



-----Original Message-----
To: ceph-users
Subject:  How to fix 1 pg stale+active+clean


How do I fix 1 pg marked as stale+active+clean?

pg 30.4 is stuck stale for 175342.419261, current state stale+active+clean, last acting [31]


_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
