Reply: Re: help -- Why are the PGs STUCK UNCLEAN?

Hi,
   Problem solved by editing the crushmap to change the default rule from "step chooseleaf firstn 0 type host" to "step chooseleaf firstn 0 type osd".

Thanks Ashish and Robert, your replies really helped me a lot.

Thanks again.


# rules
rule data {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take default
-       step chooseleaf firstn 0 type host
+       step chooseleaf firstn 0 type osd
        step emit
}
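
For reference, a sketch of the usual way to apply a crushmap edit like this on a running cluster (the file names crushmap.bin / crushmap.txt / crushmap.new are placeholders, not taken from the original thread):

# export and decompile the current CRUSH map
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt
# edit crushmap.txt: change "type host" to "type osd" in the rule shown above
# recompile and inject the modified map
crushtool -c crushmap.txt -o crushmap.new
ceph osd setcrushmap -i crushmap.new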

[root@storage1 ~]# ceph -s
    cluster 3429fd17-4a92-4d3b-a7fa-04adedb0da82
     health HEALTH_OK
     monmap e1: 1 mons at {storage1=193.168.1.100:6789/0}, election epoch 1, quorum 0 storage1
     osdmap e166: 8 osds: 8 up, 8 in
      pgmap v428: 192 pgs, 3 pools, 10000 bytes data, 1000 objects
            42564 MB used, 13883 GB / 14670 GB avail
                 192 active+clean




Re: [ceph-users] help -- Why are the PGs STUCK UNCLEAN?

Ashish Chandra   To: Robert van Leeuwen
2014/03/13 17:21

Cc: "duan.xufeng@xxxxxxxxxx", "ceph-users@xxxxxxxx"






Hi, 
Anyway, it is not recommended to run a single-node cluster, but if you must (for example, for testing purposes), you can add:

"osd crush chooseleaf type = 0" to the [global] section of ceph.conf and restart all Ceph services to get all PGs into the active+clean state.

Thanks and Regards
Ashish Chandra
Cloud Engineer, Reliance Jio
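
A minimal ceph.conf sketch of the setting Ashish mentions (the comments are illustrative; since this option mainly shapes the default CRUSH rule generated when a cluster is first created, an already-running cluster may still need the crushmap edit shown earlier on this page):

[global]
    # treat individual OSDs (bucket type 0) rather than hosts as the
    # failure domain, so a single-host test cluster can reach active+clean
    osd crush chooseleaf type = 0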


On Thu, Mar 13, 2014 at 2:35 PM, Robert van Leeuwen <Robert.vanLeeuwen@xxxxxxxxxxxxx> wrote:
> The question is that I cannot understand why the status of the PGS is always STUCK UNCLEAN. As I
> see it, the status should be ACTIVE+CLEAN.


It looks like you have one physical node.
If you have a pool with a replication count of 2 (the default), I think it will try to spread the data across 2 failure domains.
My guess is that the default crush map treats a node as a single failure domain.
So, edit the crushmap to allow this or add a second node.

Cheers,
Robert van Leeuwen
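
One way to confirm this, as a sketch (crushmap.bin stands for the compiled CRUSH map exported with ceph osd getcrushmap, as shown earlier on this page):

# with a single host and "type host", rule 0 cannot find two separate
# failure domains, so 2-replica mappings come up short
crushtool -i crushmap.bin --test --rule 0 --num-rep 2 --show-bad-mappings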








_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
