Re: how to choose EC plugins and rulesets

> -----Original Message-----
> From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of
> Yoann Moulin
> Sent: 10 March 2016 08:38
> To: Nick Fisk <nick@xxxxxxxxxx>; ceph-users@xxxxxxxx
> Subject: Re:  how to choose EC plugins and rulesets
> 
> On 10/03/2016 09:26, Nick Fisk wrote:
> > What is your intended use case RBD/FS/RGW? There are no major
> > improvements in Jewel that I am aware of. The big one will be when EC
> > pools allow direct partial overwrites without the use of a cache tier.
> 
> The main goal is for RadosGW. Most of the access will be read only.

In that case, as far as I am aware there are no limitations or gotchas for
using EC pools, although someone with more RGW experience might be able to
give you a better idea.

In terms of your original question, you want k+m in your ruleset to be less
than the number of hosts you have. The actual values you choose depend on:

1. The storage overhead you are willing to accept
2. The resilience you require
3. The performance you need

Choosing more coding (m) chunks increases resilience, but also increases the
percentage of raw capacity dedicated to that resilience.

A higher total number of k+m chunks also requires more CPU.
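
As a rough illustration of that trade-off (my own sketch, nothing from the
Ceph tooling), the snippet below enumerates candidate k+m combinations for a
hypothetical 10-host cluster with failure domain = host, and prints the
raw-space overhead and the number of host failures each profile survives:

def overhead(k, m):
    """Raw space consumed per unit of usable data for a k+m EC profile."""
    return (k + m) / float(k)

# With failure domain = host, k + m needs to stay below the host count so
# every chunk can land on a different host with a little room to spare.
num_hosts = 10  # hypothetical, set to your actual node count
for m in (1, 2, 3, 4):
    for k in range(2, num_hosts - m):
        print("k=%d m=%d  overhead=%.2fx  survives %d host failure(s)"
              % (k, m, overhead(k, m), m))

For example, k=4 m=2 stores 1.5x the usable data and survives two host
failures, whereas 3x replication stores 3x and also survives two failures.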

Once a pool is created, you cannot change its ruleset. You need to create a
new pool with the desired profile and then migrate all the data.
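
If it helps, something along these lines is how I would create the
replacement pool (a sketch only: the profile and pool names are made up and
the pg counts are placeholders, but the commands are the standard ceph CLI
ones on Infernalis/Jewel):

import subprocess

profile = "ec-k4-m2"                  # hypothetical profile name
pool = "default.rgw.buckets.data.ec"  # hypothetical pool name

# Define the erasure code profile (Infernalis/Jewel use ruleset-failure-domain;
# pick k/m to suit the trade-offs above).
subprocess.check_call(["ceph", "osd", "erasure-code-profile", "set", profile,
                       "k=4", "m=2", "ruleset-failure-domain=host"])

# Create the new pool against that profile; 128/128 pg_num/pgp_num are
# placeholders and need sizing for your cluster.
subprocess.check_call(["ceph", "osd", "pool", "create", pool,
                       "128", "128", "erasure", profile])

Migrating the existing data is a separate job; tools like rados cppool exist
but have caveats, so test the approach before relying on it.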

> 
> We are also interested in using block devices and later CephFS, but it's
> not our priority. And in those cases, we have not yet discussed replication
> or erasure.
> 
> If you have some insight about these cases, we are also interested.

Yes, RBD on EC pools only really works in a few scenarios:

1. Your data is very, very cold and hardly gets accessed
2. Your cache tier is big enough to easily hold all hot data
3. Your workload is largely read only

Otherwise I would steer clear of using RBDs on EC pools for now, until
partial write support is added.
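
For completeness: since RBD cannot write directly to an EC pool yet, all
three scenarios above imply putting a replicated cache tier in front of the
EC pool. A minimal sketch of the wiring, with made-up pool names ("rbd-ec"
for the EC data pool, "rbd-cache" for a replicated pool on faster media):

import subprocess

def ceph(*args):
    """Thin wrapper so the tiering commands below read like the CLI."""
    subprocess.check_call(["ceph"] + list(args))

# Attach the replicated cache pool in front of the EC pool and make it the
# overlay that clients actually talk to.
ceph("osd", "tier", "add", "rbd-ec", "rbd-cache")
ceph("osd", "tier", "cache-mode", "rbd-cache", "writeback")
ceph("osd", "tier", "set-overlay", "rbd-ec", "rbd-cache")

# The cache pool still needs hit_set and target_max_bytes settings
# (via "ceph osd pool set") before flushing/eviction behaves sensibly.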

> 
> Thanks,
> 
> Yoann
> 
> >> -----Original Message-----
> >> From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf
> >> Of Yoann Moulin
> >> Sent: 09 March 2016 16:01
> >> To: ceph-users@xxxxxxxx
> >> Subject:  how to choose EC plugins and rulesets
> >>
> >> Hello,
> >>
> >> We are looking for recommendations and guidelines for using erasure
> >> codes
> >> (EC) with Ceph.
> >>
> >> Our setup consists of 25 identical nodes which we dedicate to Ceph.
> >> Each node contains 10 HDDs (full specs below).
> >>
> >> We started with 10 nodes (comprising 100 OSDs) and created a pool
> >> with 3-times replication.
> >>
> >> In order to increase the usable capacity, we would like to go for EC
> >> instead of replication.
> >>
> >> - Can anybody share with us recommendations regarding the choice of
> >> plugins and rulesets?
> >> - In particular, how do we relate to the number of nodes and OSDs?
> >> Any formulas or rules of thumb?
> >> - Is it possible to change rulesets live on a pool?
> >>
> >> We currently use Infernalis but plan to move to Jewel.
> >>
> >> - Are there any improvements in Jewel with regard to erasure codes?
> >>
> >> Looking forward to your answers.
> >>
> >>
> >> =====
> >>
> >> Full specs of nodes
> >>
> >> CPU: 2 x Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz
> >> Memory: 128GB of Memory
> >> OS Storage: 2 x SSD 240GB Intel S3500 DC (RAID 1)
> >> Journal Storage: 2 x SSD 400GB Intel S3300 DC (no RAID)
> >> OSD Disk: 10 x HGST ultrastar-7k6000 6TB
> >> Network: 1 x 10Gb/s
> >> OS: Ubuntu 14.04
> >>
> >> --
> >> Yoann Moulin
> >> EPFL IC-IT
> >> _______________________________________________
> >> ceph-users mailing list
> >> ceph-users@xxxxxxxxxxxxxx
> >> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >
> 
> 
> --
> Yoann Moulin
> EPFL IC-IT
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



