Re: CephFS with erasure coding, do I need a cache-pool?

Hi Greg,

On 17.07.2018 03:01, Gregory Farnum wrote:
    Since Luminous, you can use an erasure coded pool (on bluestore)
    directly as a CephFS data pool, no cache pool needed.
    More than that, we'd really prefer you didn't use cache pools for anything. Just Say No. :)

Thanks for the confirmation - I'll happily go
without a cache pool, then. :-)


     > * Will it be sufficient to have k+m racks, i.e. failure
     >   domains?


Generally, if you want CRUSH to select X "buckets" at any level, it's good to have at least X+1 choices for it to prevent mapping failures.

So for k/m = 6/3, it would make sense to have 10 racks,
and to deploy OSD nodes in multiples of 10?


But you could also use a workaround, like letting it choose (K+M)/2 racks and putting two shards in each rack.
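For reference, such a placement could be expressed as a CRUSH rule along these lines (a sketch only; the rule name and id are made up, and 6/3 means 9 shards, so 5 racks would carry up to two shards each):

```
rule cephfs_ec_data {
    id 1
    type erasure
    min_size 9
    max_size 9
    step take default
    step choose indep 5 type rack
    step chooseleaf indep 2 type host
    step emit
}
```

The outer step picks 5 racks, and the inner step picks two hosts (and thus OSDs) within each, yielding up to 10 positions for the 9 shards.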

I probably have this wrong - wouldn't it reduce durability
to put two shards in one failure domain?
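To put numbers on that concern: with one shard per rack, an EC pool tolerates m whole-rack failures, but with two shards per rack each lost rack can take out two shards at once. A tiny sketch of that worst-case arithmetic (the helper name is my own):

```python
def rack_failures_tolerated(m: int, shards_per_rack: int) -> int:
    """Worst-case number of whole-rack failures an EC k+m pool survives
    when CRUSH places `shards_per_rack` shards in each rack."""
    # Data survives as long as at most m shards are lost in total;
    # each failed rack loses up to shards_per_rack shards.
    return m // shards_per_rack

# 6+3 with one shard per rack (9+ racks): any 3 racks can fail.
print(rack_failures_tolerated(3, 1))  # 3
# 6+3 with two shards per rack (5 racks): only 1 rack may fail.
print(rack_failures_tolerated(3, 2))  # 1
```

So the two-shards-per-rack layout trades rack-level durability for needing fewer racks, which may or may not be acceptable depending on the failure modes you care about.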


Thanks for the advice!

Oliver
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



