On Mon, Jul 16, 2018 at 1:25 AM John Spray <jspray@xxxxxxxxxx> wrote:
On Sun, Jul 15, 2018 at 12:46 PM Oliver Schulz
<oliver.schulz@xxxxxxxxxxxxxx> wrote:
>
> Dear all,
>
> we're planning a new Ceph cluster, with CephFS as the
> main workload, and would like to use erasure coding to
> use the disks more efficiently. Access pattern will
> probably be more read- than write-heavy, on average.
>
> I don't have any practical experience with erasure-
> coded pools so far.
>
> I'd be glad for any hints / recommendations regarding
> these questions:
>
> * Is an SSD cache pool recommended/necessary for
> CephFS on an erasure-coded HDD pool (using Ceph
> Luminous and BlueStore)?
Since Luminous, you can use an erasure-coded pool (on BlueStore)
directly as a CephFS data pool; no cache pool is needed.
More than that, we'd really prefer you didn't use cache pools for anything. Just Say No. :)
-Greg
John
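
For reference, here is a minimal sketch of that setup (the profile/pool
names and the pg count are just placeholder values to adapt to your
cluster):

    # 6+3 erasure-code profile with host as the failure domain
    ceph osd erasure-code-profile set ec63 k=6 m=3 crush-failure-domain=host
    # erasure-coded data pool using that profile
    ceph osd pool create cephfs_data_ec 1024 1024 erasure ec63
    # CephFS needs partial overwrites on the data pool (BlueStore only)
    ceph osd pool set cephfs_data_ec allow_ec_overwrites true
    # attach it as an additional data pool to an existing filesystem
    ceph fs add_data_pool cephfs cephfs_data_ec

As far as I know the usual recommendation is to keep a small replicated
pool as the filesystem's default data pool and direct data into the EC
pool via file layouts, since the default data pool also stores backtrace
metadata for every file.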
> * What are good values for k/m for erasure coding in
> practice (assuming a cluster of about 300 OSDs), to
> make things robust and ease maintenance (ability to
> take a few nodes down)? Is k/m = 6/3 a good choice?
That will depend on your file sizes, IO patterns, and expected durability needs. I think 6+3 is a common choice, but I don't deal with many deployments myself.
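
Just to put numbers on 6+3: each object is split into 6 data shards plus
3 coding shards, so usable capacity is k/(k+m) = 6/9, roughly 67% of raw
(1.5x overhead versus 3x for 3-way replication), and any 3 of the 9
shards can be lost without losing data.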
>
> * Will it be sufficient to have k+m racks, resp. failure
> domains?
Generally, if you want CRUSH to select X "buckets" at any level, it's good to have at least X+1 choices for it to prevent mapping failures. But you could also use a workaround such as letting it choose (K+M)/2 racks and putting two shards in each rack.
-Greg
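
For the two-shards-per-rack variant, a rough sketch of what the CRUSH
rule could look like in the decompiled map for a 6+3 pool over 5 racks
(the name, id and tunables here are placeholders, not tested values):

    rule cephfs_ec_6_3 {
            id 2
            type erasure
            min_size 9
            max_size 9
            step set_chooseleaf_tries 5
            step take default
            step choose indep 5 type rack
            step chooseleaf indep 2 type host
            step emit
    }

That places 2 shards on distinct hosts in each of 5 racks, so a whole
rack going down only removes 2 of the 9 shards, which m=3 can still
tolerate.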
>
>
> Cheers and thanks for any advice,
>
> Oliver
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com