Re: All pools have size=3 but "MB data" and "MB used" ratio is 1 to 5

> I will now start pushing a lot of data into the cluster to see whether
> the "metadata" grows a lot or stays constant.
>
> Is there a way to clean up old metadata?

I pushed a lot more data into the cluster, then let it sit idle
overnight.

This morning I found these values:

6841 MB data
25814 MB used

which is a ratio of a bit more than 1 to 3.
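As a quick sanity check (just my own arithmetic, not anything Ceph
reports directly): with all pools at size=3, replication alone should
account for roughly 3x the raw data, and the rest is the overhead I am
trying to explain:

```python
data_mb = 6841   # "MB data" as reported by ceph
used_mb = 25814  # "MB used" as reported by ceph
size = 3         # replication factor: all pools have size=3

ratio = used_mb / data_mb      # observed used/data ratio
expected = data_mb * size      # space explained by replication alone
extra = used_mb - expected     # overhead beyond replication

print("ratio:    %.2f" % ratio)      # ~3.77
print("expected: %d MB" % expected)  # 20523 MB
print("extra:    %d MB" % extra)     # 5291 MB
```

So about 5 GB of usage is not explained by replication, which matches
the size I see in the "meta" directories below.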

It looks like the extra space is in these folders (for N from 1 to 36):

/var/lib/ceph/osd/ceph-N/current/meta/

These "meta" directories contain a lot of data. I would really
appreciate pointers to understand what is stored in there and how to
clean it up, if that is possible.
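In case anyone wants to reproduce the measurement, something like this
sketch sums up each OSD's "meta" directory (dir_size_mb is just a
throwaway helper, not a Ceph tool; OSDs that do not exist on a given
host are simply skipped):

```python
import os

def dir_size_mb(path):
    """Total size of all regular files under `path`, in MB."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            fp = os.path.join(root, name)
            if os.path.isfile(fp):
                total += os.path.getsize(fp)
    return total / (1024 * 1024)

# Paths as on my cluster (N from 1 to 36); absent OSDs are skipped.
for n in range(1, 37):
    meta = "/var/lib/ceph/osd/ceph-%d/current/meta" % n
    if os.path.isdir(meta):
        print("%s: %.1f MB" % (meta, dir_size_mb(meta)))
```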

The problem is that googling for "ceph meta" or "ceph metadata" only
produces results about the Ceph MDS, which is completely unrelated :(

thanks

Saverio
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com