Hi Greg,
On 17.07.2018 03:01, Gregory Farnum wrote:
> Since Luminous, you can use an erasure coded pool (on bluestore)
> directly as a CephFS data pool, no cache pool needed.
> More than that, we'd really prefer you didn't use cache pools for
> anything. Just Say No. :)
Thanks for the confirmation - I'll happily go
without a cache pool, then. :-)
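For the record, my rough plan for the data pool would be something like
the following (pool, filesystem and profile names are just placeholders,
and the PG count would of course need tuning for our cluster):

  # bluestore EC pool, usable directly by CephFS once overwrites are enabled
  ceph osd pool create cephfs_data_ec 1024 1024 erasure ec63
  ceph osd pool set cephfs_data_ec allow_ec_overwrites true
  # attach it as an additional data pool of the existing filesystem
  ceph fs add_data_pool cephfs cephfs_data_ec

(with "ec63" being the erasure-code profile discussed below)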
>> * Will it be sufficient to have k+m racks, i.e. failure
>> domains?
> Generally, if you want CRUSH to select X "buckets" at any level, it's
> good to have at least X+1 choices for it to prevent mapping failures.
So for k/m = 6/3, it would make sense to have 10 racks, and accordingly
to deploy OSD nodes in multiples of 10?
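In that case the profile would presumably look something like this (the
profile name is again just a placeholder):

  ceph osd erasure-code-profile set ec63 k=6 m=3 crush-failure-domain=rack

so that the generated EC rule places one shard per rack.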
> But you could also do a workaround like letting it choose (K+M)/2 racks
> and putting two shards in each rack.
I probably have this wrong - wouldn't putting two shards into one failure
domain reduce durability, since losing a single rack would then take out
two shards at once?
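(Just so I'm sure I understand the suggestion: for an even k+m, say 4+2
spread over 3 racks, the rule would be roughly

  rule ec_two_per_rack {
          id 2
          type erasure
          min_size 3
          max_size 6
          step set_chooseleaf_tries 5
          step set_choose_tries 100
          step take default
          # pick 3 racks, then 2 OSDs on distinct hosts in each rack
          step choose indep 3 type rack
          step chooseleaf indep 2 type host
          step emit
  }

i.e. CRUSH first picks 3 racks and then 2 OSDs on separate hosts within
each of them - is that right?)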
Thanks for the advice!
Oliver