OMAP size on disk

Hi all,

I have several clusters, all running Luminous (12.2.7) and providing an
S3 interface. All of them have dynamic resharding enabled and it is working.
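
For reference, I check the resharding status with a small script along
these lines (it just wraps the standard radosgw-admin commands):

    import subprocess

    for cmd in (
        ["radosgw-admin", "reshard", "list"],          # reshard jobs queued / in progress
        ["radosgw-admin", "bucket", "limit", "check"], # objects per shard and fill status
    ):
        print("$ " + " ".join(cmd))
        print(subprocess.check_output(cmd).decode())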

One of the newer clusters has started to give warnings about the space
used by the OMAP directory. The default.rgw.buckets.index pool is
replicated with 3x copies of the data.
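
To get the numbers I look at the pool's replication factor and at the
on-disk omap directory, roughly like this (this assumes filestore OSDs
under the default data path; the OSD id is a placeholder):

    import subprocess

    # replication factor of the index pool
    print(subprocess.check_output(
        ["ceph", "osd", "pool", "get", "default.rgw.buckets.index", "size"]).decode())

    # filestore keeps the omap leveldb under current/omap;
    # OSD id (12) and the default data path are placeholders
    print(subprocess.check_output(
        ["du", "-sh", "/var/lib/ceph/osd/ceph-12/current/omap"]).decode())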

I created a new CRUSH ruleset to only use a few well-known SSDs, and
the OMAP directory size changed as expected: if I set an OSD out and
then tell it to compact, the OMAP shrinks. If I set the OSD back in,
the OMAP grows back to its previous size, and while the backfill is
running we see loads of key recoveries.
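
The out/compact/in cycle above is basically this (the OSD id is a
placeholder; I run the compact through the admin socket on the host
that carries the OSD):

    import subprocess

    osd = "12"  # placeholder OSD id

    subprocess.check_call(["ceph", "osd", "out", osd])
    # ...wait for backfill to finish (watch ceph -s), then compact the
    # omap database through the admin socket:
    subprocess.check_call(["ceph", "daemon", "osd." + osd, "compact"])
    # setting the OSD back in makes the omap grow again while keys are recovered
    subprocess.check_call(["ceph", "osd", "in", osd])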

Total physical space used by OMAP across the OSDs that hold it is ~1TB,
so with the 3x replica that is ~330GB before replication.

The data size for the default.rgw.buckets.data pool is just under 300GB.
There is one bucket that has ~1.7M objects and 22 shards.
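
This is roughly how I look at that bucket (the bucket name below is a
placeholder); 1.7M objects over 22 shards works out to roughly 77k
index keys per shard:

    import subprocess

    # per-bucket stats (object count, size); bucket name is a placeholder
    print(subprocess.check_output(
        ["radosgw-admin", "bucket", "stats", "--bucket=big-bucket"]).decode())

    # shard count and fill level per bucket
    print(subprocess.check_output(
        ["radosgw-admin", "bucket", "limit", "check"]).decode())

    # back-of-the-envelope: ~1.7M objects / 22 shards ~= 77k index keys per shard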

After deleting that bucket the size of the database didn't change, even
after running the gc process and telling the OSDs to compact their
databases.
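
For completeness, the cleanup looked roughly like this, plus a check
for index objects that might still carry the deleted bucket's instance
id (that id and the OSD id are placeholders):

    import subprocess

    subprocess.check_call(["radosgw-admin", "gc", "process"])

    # bucket index objects are named .dir.<bucket_instance_id>[.<shard>];
    # the id below is a placeholder for the deleted bucket's instance id
    bucket_id = "PLACEHOLDER_BUCKET_ID"
    objs = subprocess.check_output(
        ["rados", "-p", "default.rgw.buckets.index", "ls"]).decode().split()
    leftover = [o for o in objs if o.startswith(".dir." + bucket_id)]
    print(len(leftover), "index objects still reference the deleted bucket")

    # then compact the OSDs holding the index pool (OSD id is a placeholder)
    subprocess.check_call(["ceph", "daemon", "osd.12", "compact"])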

This is not happening in older clusters, i.e. clusters originally
created with Hammer. Could this be a bug?

I looked at getting all the OMAP keys and sizes
(https://ceph.com/geen-categorie/get-omap-keyvalue-size/) and they add
up to roughly the value I expected, judging by the physical storage
used.
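
That check boils down to something like this (a sketch using the
librados Python bindings; the pool name and ceph.conf path are just the
defaults here):

    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    ioctx = cluster.open_ioctx("default.rgw.buckets.index")

    grand_total = 0
    for obj in ioctx.list_objects():
        obj_total = 0
        last_key = ""
        while True:
            # page through the omap entries 1000 at a time
            with rados.ReadOpCtx() as read_op:
                it, _ = ioctx.get_omap_vals(read_op, last_key, "", 1000)
                ioctx.operate_read_op(read_op, obj.key)
                count = 0
                for key, val in it:
                    obj_total += len(key) + len(val or b"")
                    last_key = key
                    count += 1
            if count < 1000:
                break
        grand_total += obj_total
        print(obj.key, obj_total)

    print("total omap bytes:", grand_total)
    ioctx.close()
    cluster.shutdown()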

Any ideas where to look next?

Thanks for all the help.


