Metadata pool space usage decreases

Hello all,

I am seeing some weird behavior on my CephFS.
On May 29th I noticed a drop of 50 TB in my data pool, and it has been
followed by a decrease in metadata pool usage ever since.
Since May 29th, and still ongoing as I write, the metadata pool has lost
1 TB of its initial 1.8 TB.
As for the number of objects, it was 8.4 million and is now 7.8 million.

I assume that my users deleted a lot of files that day (our CephFS
consists of very small files, about 4 MB each).

But such a large drop in the metadata pool has me really concerned; it
seems huge.

I thought it might be the MDS lazy deletion at work, but I am not sure.

Does anyone have any thoughts on this?
Do you know more about lazy deletion? There is not much documentation
about it online.

Is there any command or log file that would show the lazy deletion
operations currently in progress?
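In case it helps, here is what I have been poking at so far: a sketch
that assumes the deferred deletions show up in the MDS "purge_queue"
perf counters (I am not certain the counter names are exact on
Nautilus, and "mds.a" is a placeholder for your active MDS name):

```shell
#!/bin/sh
# Sketch: where I have been looking for in-flight lazy deletions.
# Assumption: the purge queue exports perf counters on the MDS, and
# stray (unlinked but not yet purged) inode counts live under mds_cache.
MDS_NAME="${MDS_NAME:-a}"   # placeholder; set to your MDS name

if command -v ceph >/dev/null 2>&1; then
  # Purge queue counters (e.g. pq_executing / pq_executed) should show
  # deletions in flight and completed.
  ceph tell "mds.${MDS_NAME}" perf dump purge_queue
  # Stray inode counts (deleted but not yet purged files).
  ceph tell "mds.${MDS_NAME}" perf dump mds_cache
else
  echo "ceph CLI not found; run this on a cluster admin node"
fi
```

The same dumps should also be reachable via the admin socket on the
MDS host ("ceph daemon mds.<name> perf dump"), if "ceph tell" is not
an option.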

I noticed that the read_ops, read_bytes, write_ops and write_bytes reported
by the command rados df detail are negative for the metadata pool.

My cluster is running Nautilus.

Any help would be appreciated,

Best regards,

Nate
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


