Re: rgw meta pool

Hi,

My (limited) understanding of this metadata heap pool is that it's an archive of metadata entries and their versions. According to Yehuda, this was intended to support recovery operations by reverting specific metadata objects to a previous version. But nothing has been implemented so far, and I'm not aware of any plans to do so. So these objects are being created, but never read or deleted.

This was discussed in the rgw standup this morning, and we agreed that this archival should be made optional (and default to off), most likely by assigning an empty pool name to the zone's 'metadata_heap' field. I've created a ticket at http://tracker.ceph.com/issues/17256 to track this issue.
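For anyone wanting to try this once that change lands, disabling the archival would presumably look something like the following (a sketch against Jewel's radosgw-admin, untested; the zone name 'default' is assumed):

    # dump the current zone configuration
    radosgw-admin zone get --rgw-zone=default > zone.json
    # edit zone.json so the field reads: "metadata_heap": ""
    radosgw-admin zone set --rgw-zone=default --infile zone.json
    # in a multisite setup, commit the change to the period
    radosgw-admin period update --commit

Existing heap objects would still have to be cleaned up (or the pool deleted) separately.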

Casey


On 09/09/2016 11:01 AM, Warren Wang - ISD wrote:
A little extra context here. Currently the metadata pool looks like it is
on track to exceed the number of objects in the data pool over time. In a
brand-new cluster, we're already up to almost 2 million objects in each pool.

     NAME                          ID     USED      %USED     MAX AVAIL     OBJECTS
     default.rgw.buckets.data      17     3092G      0.86          345T     2013585
     default.rgw.meta              25      743M         0          172T     1975937
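
For reference, that appears to be the pools section of ceph df output, so the relative growth of the two pools is easy to track, e.g.:

    ceph df | grep -E 'default.rgw.(meta|buckets.data)'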

We're concerned this will be unmanageable over time.

Warren Wang


On 9/9/16, 10:54 AM, "ceph-users on behalf of Pavan Rallabhandi"
<ceph-users-bounces@xxxxxxxxxxxxxx on behalf of
PRallabhandi@xxxxxxxxxxxxxxx> wrote:

Any help on this is much appreciated. I am considering fixing this, given
it's confirmed as an issue, unless I am missing something obvious.

Thanks,
-Pavan.

On 9/8/16, 5:04 PM, "ceph-users on behalf of Pavan Rallabhandi"
<ceph-users-bounces@xxxxxxxxxxxxxx on behalf of
PRallabhandi@xxxxxxxxxxxxxxx> wrote:

    Trying it one more time on the users list.
In our clusters running Jewel 10.2.2, I see the default.rgw.meta pool
growing to a large number of objects, potentially approaching the number
of objects contained in the data pool.
I understand that the immutable metadata entries are now stored in
this heap pool, but I couldn't work out why the metadata objects are
left in this pool even after the actual bucket/object/user deletions.
put_entry() seems to promptly store a copy of each entry in the heap pool
(https://github.com/ceph/ceph/blob/master/src/rgw/rgw_metadata.cc#L880),
but I never see those objects reaped. Are they left there for some
reason?
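
One way to confirm the behavior (a sketch; the pool and uid names here are just examples) is to compare the meta pool's object count before and after a metadata create/delete cycle:

    rados -p default.rgw.meta ls | wc -l
    radosgw-admin user create --uid=heaptest --display-name="heap test"
    radosgw-admin user rm --uid=heaptest
    rados -p default.rgw.meta ls | wc -l

If the heap entries were reaped along with the user metadata, the second count should drop back to roughly the first; instead it stays higher.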
Thanks,
    -Pavan.



