Re: Does "ceph df" use "bogus" copies factor instead of (k, m) for erasure coded pool?

Sorry, I just realized I didn't answer your original question.

ceph df does take erasure coding settings into account and shows the
correct free space.
However, it also takes the current data distribution into account,
i.e., it reports the amount of data you can write until the first OSD
is full, assuming you don't do any rebalancing.
That's why you sometimes see lower-than-expected values there.
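
To illustrate the distribution effect, here's a rough Python sketch --
not Ceph's actual code, just the idea behind a pool's MAX AVAIL: the
writable space is bounded by whichever OSD would fill up first given
its share of the pool's data, scaled by k/(k+m) because only k of every
k+m raw chunks hold user data. The OSD sizes and crush_share fractions
below are made up for the example.

    def approx_max_avail(osds, k, m):
        # osds: list of (free_bytes, crush_share); crush_share is the
        # fraction of the pool's data that CRUSH places on that OSD.
        limiting_raw = min(free / share for free, share in osds)
        # Only k of every k+m raw chunks hold user data in an EC pool.
        return limiting_raw * k / (k + m)

    # Hypothetical, slightly uneven 3-OSD distribution:
    osds = [(4e12, 0.30), (4e12, 0.35), (4e12, 0.35)]
    print(approx_max_avail(osds, k=2, m=1))  # ~7.6e12, not the naive 8e12

With a perfectly even distribution the two numbers match; the more
skewed the distribution, the bigger the gap.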


Regarding min_size: yes, you are right about a 2+1 pool being created
with min_size 2 by default in the latest Nautilus release.
This seems like a bug to me; I've opened a ticket here:
http://tracker.ceph.com/issues/39307
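
For reference, the new default formula discussed further down the
thread is min_size = k + min(1, m - 1); a trivial sketch (not the Ceph
source, just the formula written out) shows what it yields:

    def ec_default_min_size(k, m):
        # New default: one extra shard beyond k, except when m == 1.
        return k + min(1, m - 1)

    print(ec_default_min_size(2, 1))  # -> 2 (the default 2+1 profile)
    print(ec_default_min_size(4, 2))  # -> 5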


Paul

-- 
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90

On Sat, Apr 13, 2019 at 5:18 AM Igor Podlesny <ceph-user@xxxxxxxx> wrote:
>
> And as to the min_size choice -- since you've replied exactly to that
> part of my message only.
>
> On Sat, 13 Apr 2019 at 06:54, Paul Emmerich <paul.emmerich@xxxxxxxx> wrote:
> > On Fri, Apr 12, 2019 at 9:30 PM Igor Podlesny <ceph-user@xxxxxxxx> wrote:
> > > For example, an EC pool with the default profile (2, 1) has bogus
> > > "sizing" params (size=3, min_size=3).
>
> {{
> > > Min. size 3 is wrong as far as I know and it's been fixed in fresh
> > > releases (but not in Luminous).
> }}
>
> I didn't give any proof when writing this because I was more focused
> on the EC pool usage calculation.
> Take a look at:
>
>   https://github.com/ceph/ceph/pull/8008
>
> As can be seen there, the formula for min_size effectively became
> min_size = k + min(1, m - 1) in March 2019.
> -- That's why I said "fixed in fresh releases but not in Luminous".
>
> Let's see what this new formula produces for k=2, m=1 (the default
> and documented EC profile):
>
> min_size = 2 + min(1, 1 - 1) = 2 + 0 = 2.
>
> Before that change it would have been 3 instead, hence the
> size=3/min_size=3 for EC (2, 1).
>
> [...]
> > min_size 3 is the default for that pool, yes. That means your data
> > will be unavailable if any OSD is offline.
> > Reducing min_size to 2 means you are accepting writes when you cannot
> > guarantee durability, which will cause problems in the long run.
> > See older discussions about min_size here
>
> I'd be glad to, but this is not a forum -- it's a mailing list,
> right? -- so the only way to "see here" is to rely on a search engine
> that might have indexed the mailing list archive. If you have a
> specific URL, or at least exact keywords to find what you're
> referring to, I'd gladly read what you're talking about.
>
> And of course I did search before writing; the fact that I wrote it
> anyway means I didn't find anything answering my question "here or
> there".
>
> --
> End of message. Next message?
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com