On Sun, Jul 15, 2018 at 12:46 PM Oliver Schulz <oliver.schulz@xxxxxxxxxxxxxx> wrote:
>
> Dear all,
>
> we're planning a new Ceph cluster, with CephFS as the
> main workload, and would like to use erasure coding to
> use the disks more efficiently. The access pattern will
> probably be more read- than write-heavy, on average.
>
> I don't have any practical experience with erasure-coded
> pools so far.
>
> I'd be glad for any hints / recommendations regarding
> these questions:
>
> * Is an SSD cache pool recommended/necessary for
>   CephFS on an erasure-coded HDD pool (using Ceph
>   Luminous and BlueStore)?

Since Luminous, you can use an erasure-coded pool (on BlueStore)
directly as a CephFS data pool; no cache pool is needed.

John

> * What are good values for k/m for erasure coding in
>   practice (assuming a cluster of about 300 OSDs), to
>   make things robust and ease maintenance (the ability to
>   take a few nodes down)? Is k/m = 6/3 a good choice?
>
> * Will it be sufficient to have k+m racks, i.e. k+m failure
>   domains?
>
>
> Cheers and thanks for any advice,
>
> Oliver

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
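
[Editor's sketch, for reference: roughly the commands behind John's answer on
a Luminous cluster. The profile and pool names below are placeholders, and the
usual recommendation at the time was to keep a replicated default data pool
and add the EC pool as an additional data pool, then point directories at it
via a file layout.]

    # EC profile with k=6, m=3 and hosts as the failure domain (example values)
    ceph osd erasure-code-profile set ec63 k=6 m=3 crush-failure-domain=host

    # create the EC data pool and allow overwrites (required for CephFS on EC)
    ceph osd pool create cephfs_data_ec 1024 1024 erasure ec63
    ceph osd pool set cephfs_data_ec allow_ec_overwrites true

    # attach it as an additional data pool to an existing filesystem "cephfs"
    ceph fs add_data_pool cephfs cephfs_data_ec

    # store new files under a given directory on the EC pool
    setfattr -n ceph.dir.layout.pool -v cephfs_data_ec /mnt/cephfs/bulk-data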