Re: OSD based ec-code


 



Yeah, I understand this point as well. So you keep 3 nodes as a kind of 'spare' for data rebalancing. Do you use their space, or do you cap usage at what an 11-node cluster could hold?

The reason I'm asking: I have a cluster with 6 hosts on a 4:2 EC pool, and I'm planning to add 1 more node, but as a spare only, so I won't use its space; if a host dies, all the data can be rebalanced. To be honest, I'm not sure what the correct node count is for a 4:2 EC profile. I'd guess at least 7 with host-based EC.
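
To put numbers on the node-count question, here is a rough back-of-the-envelope sketch in plain Python (just illustrative arithmetic, not a Ceph API; the function name is mine, and it assumes one chunk per host with failure-domain = host, ignoring capacity headroom):

def ec_host_math(k, m, hosts):
    """Rough host-count arithmetic for a k+m EC pool with failure-domain = host."""
    chunks = k + m                    # hosts needed just to place one chunk per host
    hosts_for_self_heal = chunks + 1  # at least one spare host to rebuild a failed host's chunks onto
    usable_fraction = k / chunks      # usable share of raw capacity
    return {
        "min_hosts": chunks,
        "hosts_for_self_heal": hosts_for_self_heal,
        "usable_fraction": round(usable_fraction, 2),
        "spare_hosts": hosts - chunks,
    }

# 4+2 on 6 hosts: data survives a host failure, but there is no extra host
# to rebuild the missing chunks onto; with 7 hosts the cluster can heal itself.
print(ec_host_math(k=4, m=2, hosts=6))
print(ec_host_math(k=4, m=2, hosts=7))

So, if those assumptions hold, 7 hosts would be the practical minimum for 4:2 with failure-domain = host if you want the cluster to rebuild to full redundancy after losing a host (capacity permitting).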

Istvan Szabo
Senior Infrastructure Engineer
---------------------------------------------------
Agoda Services Co., Ltd.
e: istvan.szabo@xxxxxxxxx
---------------------------------------------------

-----Original Message-----
From: David Orman <ormandj@xxxxxxxxxxxx> 
Sent: Tuesday, September 14, 2021 8:55 PM
To: Eugen Block <eblock@xxxxxx>
Cc: ceph-users <ceph-users@xxxxxxx>
Subject:  Re: OSD based ec-code


Keep in mind performance as well. Once you start getting into higher 'k' values with EC, you have a lot more drives involved that need to return completions for each operation, and on rotational drives this becomes especially painful. We use 8+3 for a lot of our purposes, as it's a good balance of efficiency, durability (number of complete host failures we can tolerate), and performance. It's definitely significantly slower than something like 4+2 or 3x replication, though.
It also means we don't deploy clusters below 14 hosts, so we can tolerate multiple host failures _and still accept writes_. It never fails that you have a host issue and, while working on that, another host dies. It's the same lesson many learn with RAID and single-drive redundancy: lose a drive, start a rebuild, another drive fails, and the data is gone. It's almost always correct to err on the side of durability in these decisions, unless the data is unimportant and maximum performance is required.
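
To make the 8+3-on-14-hosts reasoning concrete, here is a quick sketch in plain Python (just arithmetic, not a Ceph API; the function name is mine, and it assumes the common EC-pool default of min_size = k + 1, which you should verify on your own pools):

def ec_availability(k, m, hosts, min_size=None):
    """Rough availability arithmetic for a k+m EC pool, one chunk per host."""
    if min_size is None:
        min_size = k + 1  # assumed default for EC pools; check your cluster's actual value
    chunks = k + m
    writable_failures = chunks - min_size  # host losses the pool absorbs while still accepting writes
    loss_threshold = m                     # host losses before data becomes unrecoverable
    spare_hosts = hosts - chunks           # extra hosts to rebuild onto after failures
    return writable_failures, loss_threshold, spare_hosts

print(ec_availability(k=8, m=3, hosts=14))  # -> (2, 3, 3)
print(ec_availability(k=4, m=2, hosts=7))   # -> (1, 2, 1)

Under that min_size assumption, 8+3 on 14 hosts keeps accepting writes with two hosts down and still has three spare hosts to rebuild onto, whereas a 4+2 pool blocks I/O on the affected PGs once a second host is down, until recovery completes.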

On Tue, Sep 14, 2021 at 8:20 AM Eugen Block <eblock@xxxxxx> wrote:
>
> Hi,
>
> Consider yourself lucky that you haven't had a host failure. But I
> would not draw the wrong conclusions here and change the
> failure-domain based on luck.
> In our production cluster we have an EC pool for archive purposes. It
> all went well for quite some time, but last Sunday one of the hosts
> suddenly failed; we're still investigating the root cause. Our
> failure-domain is host, and I'm glad we chose a suitable EC profile
> for that: the cluster is healthy.
>
> > Also, what is the "optimal" profile, something like 12:3?
>
> You should evaluate that the other way around. What are your specific 
> requirements regarding resiliency (how many hosts can fail at the same 
> time without data loss)? How many hosts are available? Are you 
> planning to expand in the near future? Based on this evaluation you
> can narrow it down to a few options and choose the best fit for your
> requirements.
>
> Regards,
> Eugen
>
>
> Zitat von "Szabo, Istvan (Agoda)" <Istvan.Szabo@xxxxxxxxx>:
>
> > Hi,
> >
> > What's your take on an OSD-based EC setup? I've never been brave
> > enough to use an OSD-based CRUSH rule because I was scared of host
> > failure, but in the last 4 years we have never had any host issue,
> > so I'm thinking of switching to that and using a more cost-effective
> > EC profile.
> >
> > Also, what is the "optimal" profile, something like 12:3?
> >
> > Thank you
>
>
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



