Re: pgs stuck unclean after growing my ceph-cluster

Hi,

Thanks, my warning is gone now.

2013/3/13 Jeff Anderson-Lee <jonah@xxxxxxxxxxxxxxxxx>
On 3/13/2013 9:31 AM, Greg Farnum wrote:
Nope, it's not because you were using the cluster. The "unclean" PGs here are those which are in the "active+remapped" state. That's actually two states: "active", which is good, because it means they're serving reads and writes; and "remapped", which means that for some reason the current set of OSDs handling them isn't the set that CRUSH thinks should be handling them. Given your cluster expansion, that probably means your CRUSH map and rules aren't behaving themselves and are failing to assign the right number of replicas to those PGs. You can check this by looking at the PG dump. If you search for "ceph active remapped" it looks to me like you'll get some useful results; you might also just be able to enable the CRUSH tunables (http://ceph.com/docs/master/rados/operations/crush-map/#tunables).
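For anyone finding this thread later, a rough sketch of how to check that (the PG ID below is a placeholder and the output columns differ a bit between releases, so take it as an illustration rather than the exact procedure):

    ceph health detail              # lists the PG IDs behind the HEALTH_WARN
    ceph pg dump_stuck unclean      # PGs that have been stuck unclean for a while
    ceph pg dump | grep remapped    # full PG table, filtered to remapped entries
    ceph pg map 2.1f                # shows the "up" and "acting" OSD sets for one PG

If "up" and "acting" disagree for those PGs, CRUSH isn't mapping them onto the OSDs that are actually serving them, which matches the "active+remapped" state described above.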

Thanks for the hint to the manual. After the run completed successfully, should I set the old default values again?
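For the archives, roughly what the tunables change from that doc looks like, as a sketch: the tunable names and values are the ones documented around the Bobtail era, and whether 0.48.3argonaut accepts all of them is an assumption, so check the linked page for your release first.

    # export and decompile the current CRUSH map
    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt

    # edit crushmap.txt and add/adjust the tunable lines at the top, e.g.:
    #   tunable choose_local_tries 0
    #   tunable choose_local_fallback_tries 0
    #   tunable choose_total_tries 50

    # recompile and inject the adjusted map
    crushtool -c crushmap.txt -o crushmap.new
    ceph osd setcrushmap -i crushmap.new

Setting the old defaults back later would be the same procedure with the previous values; keeping a copy of the original crushmap.txt makes it easy to re-inject if needed.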
 
I experienced this (stuck active+remapped) frequently with the stock 0.41 apt-get/Ubuntu version of Ceph, and less so with Bobtail.

I use:
ceph version 0.48.3argonaut (commit: 920f82e805efec2cae05b79c155c07df0f3ed5dd)
on Ubuntu 12.04 with a 3.8.2-030802-generic kernel.

I can't upgrade to Bobtail until Ganeti 2.7 is out.


Thanks again for the help,

Ansgar
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
