Re: Pool Available Capacity Question

Jay Munsterman <jaymunster@xxxxxxxxx> wrote on 7 December 2018 at 21:55:25 CET:
>Hey all,
>I hope this is a simple question, but I haven't been able to figure it
>out.
>On one of our clusters there seems to be a disparity between the global
>available space and the space available to pools.
>
>$ ceph df
>GLOBAL:
>    SIZE      AVAIL     RAW USED     %RAW USED
>    1528T      505T        1022T         66.94
>POOLS:
>    NAME             ID     USED       %USED     MAX AVAIL     OBJECTS
>    fs_data          7        678T     85.79          112T     194937779
>    fs_metadata      8      62247k         0        57495G         92973
>    libvirt_pool     14       495G      0.57        86243G        127313
>
>The global available space is 505T, but the primary pool (fs_data,
>erasure coded with k=2, m=1) lists only 112T available. With k=2, m=1
>I would expect ~338T available (505T x 2/3). It seems we have a few
>hundred TB missing. Thoughts?
>Thanks,
>jay
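
Hi,

Your expectation is right in principle: with k=2, m=1 erasure coding,
usable capacity is roughly raw x k/(k+m) = 505T x 2/3, about 337T. A
quick check of that arithmetic (plain shell, using bc):

$ echo "505 * 2 / (2 + 1)" | bc -l
336.66666666666666666666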

Your OSDs are imbalanced. Ceph derives a pool's MAX AVAIL from the fullest OSD backing that pool, not from the cluster-wide free space, so a few nearly-full OSDs shrink the reported capacity for the whole pool. I suggest you check this presentation by Dan van der Ster: https://www.slideshare.net/mobile/Inktank_Ceph/ceph-day-berlin-mastering-ceph-operations-upmap-and-the-mgr-balancer
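
You can see how wide the spread is with ceph osd df; the %USE column
per OSD and the MIN/MAX VAR and STDDEV summary at the bottom will show
which OSDs are dragging MAX AVAIL down:

$ ceph osd df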

If you are running Ceph Luminous with Luminous-only clients, enable the upmap balancer mode and turn on the balancer module.
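
A minimal sketch of the commands (assuming Luminous; double-check
against the docs for your release, and note that the first step will
refuse to proceed while pre-Luminous clients are still connected):

$ ceph osd set-require-min-compat-client luminous
$ ceph mgr module enable balancer
$ ceph balancer mode upmap
$ ceph balancer on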

Regards, Stefan

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


