Re: EC pool spread evenly across failure domains?

Yes, you'll need a custom CRUSH rule for that. The key steps are:

step take default
step choose indep 3 type chassis
step chooseleaf indep 2 type host

Note that this only works for k+m=6 setups: it picks 3 chassis and then 2 hosts (one OSD each) under each of them, so the rule always emits exactly 6 OSDs, two per chassis.
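For reference, a complete rule wrapping those steps would look roughly like this in the decompiled CRUSH map (a sketch only; the rule name "ec_chassis_6" and the id are placeholders, pick ones that are free in your map):

rule ec_chassis_6 {
        id 2
        type erasure
        min_size 6
        max_size 6
        step set_chooseleaf_tries 5
        step set_choose_tries 100
        step take default
        step choose indep 3 type chassis
        step chooseleaf indep 2 type host
        step emit
}

Inject it with the usual round trip and then point the pool at it:

ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt
# add the rule to crushmap.txt, then:
crushtool -c crushmap.txt -o crushmap-new.bin
ceph osd setcrushmap -i crushmap-new.bin
ceph osd pool set <pool> crush_rule ec_chassis_6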

Paul

On Tue, 2 Oct 2018 at 20:36, Mark Johnston
<mark@xxxxxxxxxxxxxxxxxx> wrote:
>
> I have the following setup in a test cluster:
>
>  -1       8.49591 root default
> -15       2.83197     chassis vm1
>  -3       1.41599         host ceph01
>   0   ssd 1.41599             osd.0
>  -5       1.41599         host ceph02
>   1   ssd 1.41599             osd.1
> -19       2.83197     chassis vm2
>  -7       1.41599         host ceph03
>   2   ssd 1.41599             osd.2
>  -9       1.41599         host ceph04
>   3   ssd 1.41599             osd.3
> -20       2.83197     chassis vm3
> -11       1.41599         host ceph05
>   4   ssd 1.41599             osd.4
> -13       1.41599         host ceph06
>   5   ssd 1.41599             osd.5
>
> I created an EC pool with k=4 m=2 and crush-failure-domain=chassis.  The PGs
> are stuck in creating+incomplete with only 3 assigned OSDs each.  I'm assuming
> this is because using crush-failure-domain=chassis requires a different chassis
> for every chunk.
>
> I don't want to switch to k=2 m=1 because I want to be able to survive two OSD
> failures, and I don't want to use crush-failure-domain=host because I don't want
> more than two chunks to be placed on the same chassis.  (The production cluster
> will have more than two hosts per chassis, so crush-failure-domain=host could
> put all 6 chunks on the same chassis.)
>
> Do I need to write a custom CRUSH rule to get this to happen?  Or have I missed
> something?
>
> Thanks,
> Mark
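
Before injecting the new map with setcrushmap, you can also dry-run the rule against the compiled map (the rule id 2 here matches the placeholder in the sketch above):

crushtool -i crushmap-new.bin --test --rule 2 --num-rep 6 --show-mappings

With two hosts per chassis in your test tree, every PG should map to six OSDs, two per chassis; adding --show-bad-mappings will flag anything CRUSH failed to place.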



-- 
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90



