Re: EC pool spread evenly across failure domains?

On Tue, Oct 2, 2018 at 11:35 AM Mark Johnston <mark@xxxxxxxxxxxxxxxxxx> wrote:
>
> I have the following setup in a test cluster:
>
>  -1       8.49591 root default
> -15       2.83197     chassis vm1
>  -3       1.41599         host ceph01
>   0   ssd 1.41599             osd.0
>  -5       1.41599         host ceph02
>   1   ssd 1.41599             osd.1
> -19       2.83197     chassis vm2
>  -7       1.41599         host ceph03
>   2   ssd 1.41599             osd.2
>  -9       1.41599         host ceph04
>   3   ssd 1.41599             osd.3
> -20       2.83197     chassis vm3
> -11       1.41599         host ceph05
>   4   ssd 1.41599             osd.4
> -13       1.41599         host ceph06
>   5   ssd 1.41599             osd.5
>
> I created an EC pool with k=4 m=2 and crush-failure-domain=chassis.  The PGs
> are stuck in creating+incomplete with only 3 assigned OSDs each.  I'm assuming
> this is because using crush-failure-domain=chassis requires a different chassis
> for every chunk.
>
> I don't want to switch to k=2 m=1 because I want to be able to survive two OSD
> failures, and I don't want to use crush-failure-domain=host because I don't want
> more than two chunks to be placed on the same chassis.  (The production cluster
> will have more than two hosts per chassis, so crush-failure-domain=host could
> put all 6 chunks on the same chassis.)
>
> Do I need to write a custom CRUSH rule to get this to happen?  Or have I missed
> something?
Your hierarchy only includes 3 chassis with 2 nodes each, so the maximum k+m for a
"chassis" failure domain is 3. You can either change the failure domain to "rack"
and move each node into its own rack, so that you have 6 racks, or create
additional chassis and move the nodes around. You don't have to hand-edit the
crushmap for that: "ceph osd crush" with add-bucket/move/remove etc. should let you
do it (http://docs.ceph.com/docs/master/rados/operations/crush-map/), e.g.:
ceph osd crush add-bucket rack1 rack
ceph osd crush move node1 rack=rack1
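
Spelled out against the tree above, the rack route would look roughly like this
(untested sketch; the profile and pool names are made up, and the stuck pool would
need to be recreated against the new profile, since the failure domain is baked
into the crush rule the profile generates):

# one rack per host, parented under the default root
for i in 1 2 3 4 5 6; do
    ceph osd crush add-bucket rack$i rack
    ceph osd crush move rack$i root=default
done
ceph osd crush move ceph01 rack=rack1
ceph osd crush move ceph02 rack=rack2
ceph osd crush move ceph03 rack=rack3
ceph osd crush move ceph04 rack=rack4
ceph osd crush move ceph05 rack=rack5
ceph osd crush move ceph06 rack=rack6

# new profile with rack as the failure domain, then a fresh pool on it
ceph osd erasure-code-profile set ec-4-2-rack k=4 m=2 crush-failure-domain=rack
ceph osd pool create ecpool-4-2 32 32 erasure ec-4-2-rack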
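
As for the custom CRUSH rule question: if you'd rather keep the chassis layout as
it is, a rule that picks 3 chassis and then 2 hosts in each one will also place the
6 chunks with at most 2 per chassis. Roughly something like this, added by
decompiling/recompiling the crushmap with crushtool and loading it back with
"ceph osd setcrushmap -i" (a sketch only; the rule name and id are placeholders):

rule ec42_two_per_chassis {
        id 2
        type erasure
        min_size 3
        max_size 6
        step set_chooseleaf_tries 5
        step set_choose_tries 100
        step take default
        step choose indep 3 type chassis
        step chooseleaf indep 2 type host
        step emit
}

Then point the pool at it with "ceph osd pool set <pool> crush_rule
ec42_two_per_chassis". Bear in mind that a whole-chassis failure then takes out 2
of the 6 chunks at once, and with only 3 chassis there is nowhere for CRUSH to
recover them to until that chassis comes back.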

>
> Thanks,
> Mark
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


