Re: CephFS with erasure coding, do I need a cache-pool?

On Sun, Jul 15, 2018 at 12:46 PM Oliver Schulz
<oliver.schulz@xxxxxxxxxxxxxx> wrote:
>
> Dear all,
>
> we're planning a new Ceph cluster, with CephFS as the
> main workload, and would like to use erasure coding to
> make more efficient use of the disks. The access pattern
> will probably be more read- than write-heavy, on average.
>
> I don't have any practical experience with erasure-
> coded pools so far.
>
> I'd be glad for any hints / recommendations regarding
> these questions:
>
> * Is an SSD cache pool recommended/necessary for
>    CephFS on an erasure-coded HDD pool (using Ceph
>    Luminous and BlueStore)?

Since Luminous, you can use an erasure-coded pool (on BlueStore)
directly as a CephFS data pool; no cache pool is needed.
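
For reference, a rough sketch of the commands involved (the profile,
pool, filesystem, and directory names below are made up for
illustration). The EC pool needs allow_ec_overwrites enabled, and is
typically attached as an additional data pool alongside a replicated
default data pool:

  # Define an EC profile; k/m and the failure domain are example values.
  ceph osd erasure-code-profile set ec63 \
      k=6 m=3 crush-failure-domain=rack

  # Create the erasure-coded data pool (the PG count is illustrative).
  ceph osd pool create cephfs_data_ec 1024 erasure ec63

  # EC pools must allow overwrites before CephFS can use them.
  ceph osd pool set cephfs_data_ec allow_ec_overwrites true

  # Attach it as an additional data pool of an existing filesystem.
  ceph fs add_data_pool cephfs cephfs_data_ec

  # Point a directory at the EC pool via a file layout; new files
  # created under it will be stored erasure-coded.
  setfattr -n ceph.dir.layout.pool -v cephfs_data_ec /mnt/cephfs/bulk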

John

> * What are good values for k/m for erasure coding in
>    practice (assuming a cluster of about 300 OSDs), to
>    make things robust and ease maintenance (ability to
>    take a few nodes down)? Is k/m = 6/3 a good choice?
>
> * Will it be sufficient to have k+m racks (i.e., failure
>    domains)?
>
>
> Cheers and thanks for any advice,
>
> Oliver