Stephen,
You are right. A crash can happen if the replica size doesn't match the number of OSDs. I am not sure there is any other solution for your requirement: "choose the first 2 replicas from one rack and choose the third replica from any other rack different from the first".
Some different thoughts:
1) If you have 3 racks, you can choose 3 racks and then chooseleaf 1 host in each, ensuring three replicas in three separate racks.
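A minimal sketch of that rule, assuming a root bucket named "default" and replicated pools of size 3 (the rule name and ruleset id are placeholders, not from your map):

```
rule replicated_three_racks {
    ruleset 1
    type replicated
    min_size 3
    max_size 3
    # pick 3 distinct racks, then one host (and its OSD) in each
    step take default
    step choose firstn 3 type rack
    step chooseleaf firstn 1 type host
    step emit
}
```

This only works if at least 3 racks exist under the chosen root; with fewer racks, CRUSH will return fewer replicas than requested.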
2) Another approach:

    take rack1
    chooseleaf firstn 2 type host
    emit
    take rack2
    chooseleaf firstn 1 type host
    emit

This of course pins the first 2 replicas to rack1 and may become unbalanced, so make sure rack1 has enough storage.
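Written out as a full CRUSH rule, the steps above might look like the sketch below. The bucket names "rack1" and "rack2" and the ruleset id are assumptions; substitute the rack bucket names from your own map:

```
rule two_rack1_one_rack2 {
    ruleset 2
    type replicated
    min_size 3
    max_size 3
    # first two replicas: distinct hosts inside rack1
    step take rack1
    step chooseleaf firstn 2 type host
    step emit
    # third replica: a host inside rack2
    step take rack2
    step chooseleaf firstn 1 type host
    step emit
}
```

Note that multiple take/emit sequences hard-code the placement: rack1 always receives the two "first" replicas (and hence the primaries), which is what makes this layout potentially unbalanced.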
Thanks,
Johnu
From: Stephen Jahl <stephenjahl@xxxxxxxxx>
Date: Thursday, October 9, 2014 at 11:11 AM
To: Loic Dachary <loic@xxxxxxxxxxx>
Cc: "ceph-users@xxxxxxxxxxxxxx" <ceph-users@xxxxxxxxxxxxxx>
Subject: Re: Monitor segfaults when updating the crush map

Thanks Loic,
In my case, I actually only have three replicas for my pools -- with this rule, I'm trying to ensure that OSDs in at least two racks are selected. Since the replica size is only 3, I think I'm still affected by the bug (unless of course I set my replica size to 4).
Is there a better way I can express what I want in the crush rule, preferably in a way not hit by that bug ;) ? Is there an ETA on when that bugfix might land in firefly?
Best,
-Steve
On Thu, Oct 9, 2014 at 1:59 PM, Loic Dachary
<loic@xxxxxxxxxxx> wrote:
Hi Stephen,
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com