Re: Pool Available Capacity Question

Jay Munsterman <jaymunster@xxxxxxxxx>:
>
> Thanks, great presentation, and that explains it. It also has some interesting ideas on using upmap in different ways.
>
> Our cluster is Luminous. Does anyone know the mapping of Ceph client versions to CentOS kernels? It looks like Red Hat has a knowledge base article on the subject, available to customers. Running "ceph features" in our environment indicates a number of clients on Jewel; I am guessing that is the standard for the CentOS 7.x kernel.
>

Probably, yes. Kernels >= 4.13 should support upmap, but CentOS kernels
often carry lots of backports, so it might also work with an earlier kernel.
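
If you want to see what the clients actually advertise, "ceph features"
on a mon breaks the connected clients down by feature release. A rough
sketch of the Luminous output (heavily abridged; the feature bits and
counts below are placeholders, and the exact JSON layout varies by
release):

$ ceph features
{
    ...
    "client": {
        "group": {
            "features": "0x...",
            "release": "jewel",
            "num": 12
        },
        "group": {
            "features": "0x...",
            "release": "luminous",
            "num": 4
        }
    }
}

Anything still reporting older than Luminous there will block
"ceph osd set-require-min-compat-client luminous", which upmap needs.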

-- 
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90

> jay
>
> On Sat, Dec 8, 2018 at 8:35 AM Stefan Kooman <stefan@xxxxxx> wrote:
>>
>> Jay Munsterman <jaymunster@xxxxxxxxx> wrote on 7 December 2018 at 21:55:25 CET:
>> >Hey all,
>> >I hope this is a simple question, but I haven't been able to figure it
>> >out.
>> >On one of our clusters there seems to be a disparity between the global
>> >available space and the space available to pools.
>> >
>> >$ ceph df
>> >GLOBAL:
>> >    SIZE      AVAIL     RAW USED     %RAW USED
>> >    1528T      505T        1022T         66.94
>> >POOLS:
>> >    NAME             ID     USED       %USED     MAX AVAIL     OBJECTS
>> >    fs_data          7        678T     85.79          112T     194937779
>> >    fs_metadata      8      62247k         0        57495G         92973
>> >    libvirt_pool     14       495G      0.57        86243G        127313
>> >
>> >The global available space is 505T, but the primary pool (fs_data,
>> >erasure code k=2, m=1) lists only 112T available. With k=2, m=1 I
>> >would expect ~338T to be available (505T x 2/3). It seems we have a
>> >few hundred TB missing. Thoughts?
>> >Thanks,
>> >jay
>>
>> Your OSDs are imbalanced. Ceph reports a pool's available space based on the most-full OSD. I suggest you check this presentation by Dan van der Ster: https://www.slideshare.net/mobile/Inktank_Ceph/ceph-day-berlin-mastering-ceph-operations-upmap-and-the-mgr-balancer
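
A quick sanity check with the numbers quoted above (plain arithmetic on
the ceph df output, nothing cluster-specific assumed):

    678T / (678T + 112T) ≈ 0.858, i.e. the 85.79 %USED shown for fs_data

So MAX AVAIL is the headroom the fullest OSD leaves for that pool, not
a share of the global 505T AVAIL.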
>>
>> If you are running Ceph Luminous with Luminous-only clients, enable upmap for balancing and enable the balancer module.
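
For reference, the sequence for that on Luminous is roughly the
following (run against a mon; the first command refuses to proceed
while pre-Luminous clients are still connected):

$ ceph osd set-require-min-compat-client luminous
$ ceph mgr module enable balancer
$ ceph balancer mode upmap
$ ceph balancer on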
>>
>> Gr. Stefan
>>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com