Thanks, great presentation, and that explains it. Some interesting ideas there on using upmap in different ways, too.
Our cluster is Luminous. Does anyone know the mapping of Ceph client versions to CentOS kernel versions? It looks like Red Hat has a knowledge base article on the subject, but only available to customers. Running "ceph features" in our environment indicates a number of clients on Jewel; I am guessing that is what the stock CentOS 7.x kernel reports.
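For reference, the commands I used (the mon name "mon.a" below is just a placeholder, substitute your own mon ID):

  $ ceph features              # groups connected daemons and clients by the release their feature bits map to
  $ ceph daemon mon.a sessions # per-session detail, including the features each client reports; run on the mon host, it uses the admin socket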
jay
On Sat, Dec 8, 2018 at 8:35 AM Stefan Kooman <stefan@xxxxxx> wrote:
Jay Munsterman <jaymunster@xxxxxxxxx> wrote on 7 December 2018 21:55:25 CET:
>Hey all,
>I hope this is a simple question, but I haven't been able to figure it out.
>On one of our clusters there seems to be a disparity between the global
>available space and the space available to pools.
>
>$ ceph df
>GLOBAL:
>    SIZE      AVAIL     RAW USED     %RAW USED
>    1528T     505T      1022T        66.94
>POOLS:
>    NAME             ID     USED       %USED     MAX AVAIL     OBJECTS
>    fs_data          7      678T       85.79     112T          194937779
>    fs_metadata      8      62247k     0         57495G        92973
>    libvirt_pool     14     495G       0.57      86243G        127313
>
>The global available space is 505T, but the primary pool (fs_data, erasure code k=2, m=1) lists 112T available. With k=2, m=1 I would expect there to be ~338T available (505 x .67). Seems we have a few hundred TB missing.
>Thoughts?
>Thanks,
>jay
Your OSDs are imbalanced. Ceph reports a pool's MAX AVAIL based on the fullest OSD, so an uneven distribution lowers the available space shown per pool. I suggest you check this presentation by Dan van der Ster: https://www.slideshare.net/mobile/Inktank_Ceph/ceph-day-berlin-mastering-ceph-operations-upmap-and-the-mgr-balancer
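To see how large the spread is, "ceph osd df" is the quickest check; the %USE and VAR columns, and the MIN/MAX VAR summary at the bottom, show how far the fullest OSD is from the mean:

  $ ceph osd df   # per-OSD SIZE, USE, AVAIL, %USE and VAR, with MIN/MAX VAR and STDDEV summarized at the end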
If you are running Ceph Luminous with Luminous-only clients: enable upmap for balancing and enable the balancer module.
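Roughly, the sequence would be something like this (untested against your cluster, so double-check against the docs for your release; set-require-min-compat-client will refuse, or needs --yes-i-really-mean-it, if pre-Luminous clients are still connected):

  $ ceph features                                   # confirm every client maps to luminous first
  $ ceph osd set-require-min-compat-client luminous # required before upmap entries can be used
  $ ceph mgr module enable balancer                 # if it is not already in the enabled modules list
  $ ceph balancer mode upmap
  $ ceph balancer on
  $ ceph balancer status                            # "ceph balancer eval" shows the current distribution score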
Gr. Stefan