Re: All pools have size=3 but "MB data" and "MB used" ratio is 1 to 5

Ceph has per-PG and per-OSD metadata overhead. You currently have 26000 PGs, suitable for a cluster on the order of 260 OSDs. You have placed almost 7GB of data into it (21GB replicated) and have about 7GB of additional overhead.
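
For reference, the sizing arithmetic behind that estimate is just the common ~100 PGs per OSD rule of thumb; a minimal sketch in Python (the exact per-OSD target varies with workload, so treat the 100 as an assumption, not a hard limit):

    # Back out the OSD count a given total PG count is sized for,
    # using the ~100 PGs per OSD rule of thumb (an approximation).
    def osds_for_pg_count(total_pgs, pgs_per_osd=100):
        """Approximate OSD count a given total PG count is appropriate for."""
        return total_pgs / pgs_per_osd

    print(osds_for_pg_count(26000))  # -> 260.0, i.e. "on the order of 260 OSDs"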

You might try putting a suitable amount of data into the cluster before worrying about the ratio of space used to data stored. :)
-Greg
On Fri, Mar 27, 2015 at 3:26 AM Saverio Proto <zioproto@xxxxxxxxx> wrote:
> I will now start to push a lot of data into the cluster to see if the
> "metadata" grows a lot or stays constant.
>
> Is there a way to clean up old metadata?

I pushed a lot more data into the cluster, then let it sit overnight.

This morning I found these values:

6841 MB data
25814 MB used

that is a ratio of a bit more than 1 to 3 (roughly 1 to 3.8).

It looks like the extra space is in these folders (for N from 1 to 36):

/var/lib/ceph/osd/ceph-N/current/meta/

These "meta" folders have a lot of data in them. I would really appreciate
pointers to understand what is in there and how to clean it up eventually.
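
In case it is useful to anyone else, here is a minimal sketch (plain Python, assuming the default /var/lib/ceph/osd/ceph-N/current/meta layout above; running du -sh on the same paths gives the same numbers) that totals the on-disk size of each "meta" directory:

    # Sum the on-disk size of every OSD's "meta" directory (run on an OSD host).
    import os
    from glob import glob

    def dir_size(path):
        """Total size in bytes of all regular files below path."""
        total = 0
        for root, _dirs, files in os.walk(path):
            for name in files:
                try:
                    total += os.path.getsize(os.path.join(root, name))
                except OSError:
                    pass  # objects can disappear while we walk
        return total

    for meta in sorted(glob("/var/lib/ceph/osd/ceph-*/current/meta")):
        print(meta, dir_size(meta) // (1024 * 1024), "MB")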

The problem is that googling for "ceph meta" or "ceph metadata" only turns
up results about the Ceph MDS, which is completely unrelated :(

thanks

Saverio
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
