Re: min_size vs. K in erasure coded pools

You can reduce min_size to k in an ec pool. But that's a very bad idea
for the same reason that min_size 1 on a replicated pool is bad.
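If you do need it in a disaster scenario, the temporary change is a one-liner (the pool name "ecpool" and k=4 below are placeholders, not from this thread); just remember to restore k+1 once the OSDs are recovered:

```shell
# Inspect the current value (pool name is a placeholder)
ceph osd pool get ecpool min_size

# Disaster recovery only: allow I/O with just k surviving shards (here k=4)
ceph osd pool set ecpool min_size 4

# Restore the safe default (k+1) once the OSDs are back up
ceph osd pool set ecpool min_size 5
```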

Paul

-- 
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90

On Wed, Feb 20, 2019 at 11:27 AM Eugen Block <eblock@xxxxxx> wrote:
>
> Hi,
>
> I see that as a safety feature ;-)
> Data stays recoverable as long as k chunks are intact, but you don't
> want to operate with the bare minimum number of chunks. In a disaster
> scenario you can reduce min_size to k temporarily, but the main goal
> should always be to get the OSDs back up.
> For example, in a replicated pool with size 3 we set min_size to 2,
> not to 1, although 1 would also work while everything is healthy. But
> it's risky, since there's a chance that two corrupt copies end up
> overwriting a healthy one.
>
> Regards,
> Eugen
>
>
> Zitat von "Clausen, Jörn" <jclausen@xxxxxxxxx>:
>
> > Hi!
> >
> > While trying to understand erasure coded pools, I would have
> > expected the "min_size" of a pool to be equal to the "K" parameter.
> > But it turns out that it is always K+1.
> >
> > Isn't the description of erasure coding misleading, then? In a K+M
> > setup, I would expect to be good (in the sense of "no service
> > impact") even if M OSDs are lost. But in reality, my clients would
> > already experience an impact once M OSDs are lost, i.e. one failure
> > earlier than expected. This means you should always plan for one
> > more spare than you would in e.g. a classic RAID setup, right?
> >
> > Joern
> >
> > --
> > Jörn Clausen
> > Daten- und Rechenzentrum
> > GEOMAR Helmholtz-Zentrum für Ozeanforschung Kiel
> > Düsternbrookerweg 20
> > 24105 Kiel
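The arithmetic behind this trade-off can be sketched in a few lines (the 4+2 profile and the helper name `ec_pool_tolerance` are illustrative, not from the thread):

```python
# Sketch of Ceph's default min_size = k + 1 for erasure coded pools.
def ec_pool_tolerance(k: int, m: int):
    """Return (min_size, OSD losses without I/O impact, losses without data loss)."""
    min_size = k + 1      # Ceph's default for EC pools, as discussed above
    total = k + m         # total shards per PG
    # PGs stay active (serve I/O) while at least min_size shards survive:
    losses_io = total - min_size    # = m - 1
    # Data remains reconstructible while at least k shards survive:
    losses_data = total - k         # = m
    return min_size, losses_io, losses_data

# Example: a hypothetical 4+2 profile (the classic RAID-6 comparison)
print(ec_pool_tolerance(4, 2))  # (5, 1, 2): I/O survives 1 loss, data survives 2
```

So a 4+2 pool keeps data intact through two OSD losses, but clients already block after the second one, matching Jörn's observation.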
>
>
>
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



