Re: help - Why are the PGs STUCK UNCLEAN?

Hi, 
In any case, running a single node is not recommended, but if you must (for example, for testing purposes) you can include:

"osd crush chooseleaf type = 0" in the [global] section of ceph.conf and restart all Ceph services to get all PGs into the active+clean state.

Thanks and Regards
Ashish Chandra
Cloud Engineer, Reliance Jio


On Thu, Mar 13, 2014 at 2:35 PM, Robert van Leeuwen <Robert.vanLeeuwen@xxxxxxxxxxxxx> wrote:
> What I cannot understand is why the status of the PGs is always STUCK UNCLEAN. As I
> see it, the status should be ACTIVE+CLEAN.


It looks like you have one physical node.
If you have a pool with a replication count of 2 (the default), I think CRUSH will try to spread the data across 2 failure domains.
My guess is that the default CRUSH map treats a node (host) as a single failure domain, so the second replica has nowhere to go.
So, either edit the CRUSH map to allow both replicas on one node (see the sketch below) or add a second node.
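
If you go the CRUSH map route, the usual cycle looks roughly like this (a sketch; the rule and bucket names come from your own decompiled map, so check crushmap.txt before loading it back):

    ceph osd getcrushmap -o crushmap.bin        # export the compiled map
    crushtool -d crushmap.bin -o crushmap.txt   # decompile it to text
    # in the replicated rule, change:
    #   step chooseleaf firstn 0 type host
    # to:
    #   step chooseleaf firstn 0 type osd
    crushtool -c crushmap.txt -o crushmap.new   # recompile
    ceph osd setcrushmap -i crushmap.new        # load the edited map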

Cheers,
Robert van Leeuwen

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


