cephfs_metadata pool unexpected space utilization

Hi,

I am observing strange behavior on my CephFS cluster: the cephfs_metadata pool is filling up for no obvious reason, growing by about 15% per day even when there is no I/O on the cluster. The metadata pool sits on dedicated SSDs, each 112 GiB, with pool replica size 3.
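
In case it helps, I can post the output of the following commands (pool name as above; the CRUSH rule name comes from the second command):

  ceph osd pool get cephfs_metadata size        # should confirm size: 3
  ceph osd pool get cephfs_metadata crush_rule  # which CRUSH rule the pool uses
  ceph osd crush rule dump <rule_name>          # confirm it maps to the ssd device class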

`ceph fs status` shows only about 1.5 GiB used by metadata, yet only 19.9 GiB available:

POOL             TYPE      USED   AVAIL
cephfs_metadata  metadata  1446M  19.9G
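
The same numbers can be cross-checked from the rados side; again, I can post the output if useful (pool name as above):

  rados df                              # per-pool object count and space usage
  rados -p cephfs_metadata ls | wc -l   # number of objects actually stored in the pool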


On the other hand, `ceph osd df` shows that each device is 77% utilized (it was 9% five days ago) and holds 85 GiB of data:

ID   CLASS  WEIGHT   REWEIGHT  SIZE     RAW USE  DATA    OMAP     META     AVAIL   %USE   VAR   PGS  STATUS
119  ssd    0.10918  1.00000   112 GiB  86 GiB   85 GiB  317 MiB  590 MiB  26 GiB  77.09  1.11  128  up
100  ssd    0.10918  1.00000   112 GiB  86 GiB   85 GiB  309 MiB  658 MiB  26 GiB  77.14  1.11  128  up
 82  ssd    0.10918  1.00000   112 GiB  86 GiB   85 GiB  393 MiB  617 MiB  26 GiB  77.17  1.11  128  up
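
Since `ceph osd df` splits usage into DATA, OMAP and META, I assume the per-OSD RocksDB/BlueFS consumption can be inspected via the admin socket on the node hosting each OSD, e.g. for osd.119:

  # run on the host of osd.119; reports counters such as db_used_bytes and slow_used_bytes
  ceph daemon osd.119 perf dump bluefs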


`ceph df` likewise shows the mismatch between raw usage and pool-level usage:

CLASS  SIZE     AVAIL   USED     RAW USED  %RAW USED
ssd    335 GiB  76 GiB  259 GiB  259 GiB   77.20

POOL             ID  PGS  STORED   OBJECTS  USED     %USED  MAX AVAIL
cephfs_metadata   2  128  492 MiB  1.78k    1.4 GiB   2.37  20 GiB
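
If I do the math, the pool-level numbers are consistent: 492 MiB STORED x 3 replicas is roughly the 1.4 GiB USED. But each of the three SSDs reports ~86 GiB RAW USE, i.e. about 258 GiB in total, so something like 256 GiB on the metadata OSDs is not attributed to the cephfs_metadata pool at all.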


I don't know what data is consuming so much space there. This started after the upgrade from Nautilus to Pacific; before the upgrade, cephfs_metadata utilization was always around 9%.
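
One thing I have not tried yet (and I am not sure it is the right fix) is an online RocksDB compaction of the metadata OSDs, which as far as I understand can reclaim space held by deleted/tombstoned keys:

  # online compaction, one OSD at a time (IDs taken from the `ceph osd df` output above)
  ceph tell osd.119 compact
  ceph tell osd.100 compact
  ceph tell osd.82 compact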


Any idea what could be wrong and how to fix it?


Thank you

Denis
