Re: OMAP size on disk

Does anyone have a good blog entry or explanation of bucket index sharding
requirements/commands?  Perhaps a howto as well?
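The best I've pieced together so far is roughly the following, but I honestly
don't know whether this is the right procedure for Luminous (the bucket name
and shard count are just made-up examples):

    # show which buckets are over the per-shard object threshold
    radosgw-admin bucket limit check

    # current shard count / object count for one bucket
    radosgw-admin bucket stats --bucket=mybucket

    # queue a manual reshard and watch it run
    radosgw-admin reshard add --bucket=mybucket --num-shards=32
    radosgw-admin reshard list
    radosgw-admin reshard process
    radosgw-admin reshard status --bucket=mybucket

As far as I can tell the automatic side is driven by rgw_dynamic_resharding
and rgw_max_objs_per_shard, if I'm reading the docs right.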

I upgraded our cluster to Luminous and now I have a warning about 5 large
omap objects.  The official blog says that dynamic sharding is turned on by
default, but we upgraded rather than installing fresh, so I can't tell
whether our existing buckets were sharded during the upgrade or whether that
is something I need to do afterwards (the blog doesn't say).  Also, the
manual sharding commands want a shard count; if the process is automated,
why would I need to provide one?  Not to mention I don't know what value to
start with...
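
In the meantime, this is how I've been trying to work out which objects the
warning is actually about (I'm guessing at the exact log message wording):

    ceph health detail        # names the pool(s) with large omap objects
    grep -i 'large omap object' /var/log/ceph/ceph.log

    # index objects are named .dir.<bucket_id>[.<shard>], so the bucket they
    # belong to can be matched up via:
    radosgw-admin metadata list bucket.instance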

I also found https://tracker.ceph.com/issues/24457, which talks about the
same issue; comment #14 says the reporter worked through it, but the details
seem to be beyond my google-fu.

-Brent

-----Original Message-----
From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of
Matt Benjamin
Sent: Tuesday, October 9, 2018 7:28 AM
To: Luis Periquito <periquito@xxxxxxxxx>
Cc: Ceph Users <ceph-users@xxxxxxxxxxxxxx>
Subject: Re:  OMAP size on disk

Hi Luis,

There are currently open issues with space reclamation after dynamic bucket
index resharding, esp. http://tracker.ceph.com/issues/34307

Changes are being worked on to address this, and to permit administratively
reclaiming space.
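
In the meantime, one way to at least see whether pre-reshard index instances
are still hanging around is to compare the bucket instance metadata against
what each bucket currently points at (the bucket name below is a
placeholder):

    # every bucket instance rgw knows about, including old ones
    radosgw-admin metadata list bucket.instance

    # the instance the bucket is currently using
    radosgw-admin bucket stats --bucket=mybucket | grep -E '"id"|"marker"'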

Matt

On Tue, Oct 9, 2018 at 5:50 AM, Luis Periquito <periquito@xxxxxxxxx> wrote:
> Hi all,
>
> I have several clusters, all running Luminous (12.2.7) and providing an
> S3 interface. All of them have dynamic resharding enabled and working.
>
> One of the newer clusters is starting to give warnings about the space
> used by the OMAP directory. The default.rgw.buckets.index pool is
> replicated with 3x copies of the data.
>
> I created a new crush ruleset that only uses a few well-known SSDs, and
> the OMAP directory size changed as expected: if I set the OSD out and
> then tell it to compact, the OMAP shrinks; if I set the OSD back in, the
> OMAP grows back to its previous size. While the backfill is running we
> see loads of key recoveries.
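
For anyone trying to reproduce this, the out/compact cycle described above
is roughly:

    ceph osd out 12            # PGs backfill away and the omap moves elsewhere
    ceph tell osd.12 compact   # compact its leveldb/rocksdb - size shrinks
    ceph osd in 12             # PGs and their omap come back

(osd.12 is just a stand-in for one of the index OSDs.)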
>
> Total physical space used by OMAP on the OSDs that hold it is ~1TB, so
> given the 3x replica that is ~330G before replication.
>
> The data size for the default.rgw.buckets.data pool is just under 300G.
> There is one bucket which has ~1.7M objects and 22 shards.
>
> After deleting that bucket the size of the database didn't change -
> even after running the gc process and telling the OSD to compact its
> database.
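
For reference, the gc/compact step there would be something along the lines
of the following (the osd id again is just a stand-in):

    radosgw-admin gc list --include-all   # entries still waiting for gc
    radosgw-admin gc process              # run gc now
    ceph tell osd.12 compact              # then compact the index OSDs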
>
> This is not happening in the older clusters, i.e. the ones created with
> Hammer. Could this be a bug?
>
> I looked at getting all the OMAP keys and sizes
> (https://ceph.com/geen-categorie/get-omap-keyvalue-size/) and they add
> up to close to the value I expected, looking at the physical storage.
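
The script from that post boils down to roughly this, run against the index
pool (counting keys per index object as a proxy for size):

    for obj in $(rados -p default.rgw.buckets.index ls); do
        echo "$obj $(rados -p default.rgw.buckets.index listomapkeys "$obj" | wc -l)"
    done | sort -k2 -n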
>
> Any ideas where to look next?
>
> thanks for all the help.



-- 

Matt Benjamin
Red Hat, Inc.
315 West Huron Street, Suite 140A
Ann Arbor, Michigan 48103

http://www.redhat.com/en/technologies/storage

tel.  734-821-5101
fax.  734-769-8938
cel.  734-216-5309
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


