I'm going to piggyback on this thread somewhat. I've battled RocksDB spillovers for the life of this cluster since moving to BlueStore, but until now I've always been able to compact them away. This time I'm stumped: no matter how many times I run ceph tell osd.$osd compact, which has always worked in the past, this OSD spills over exactly 192 KiB.
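For reference, this is roughly the sequence I've been repeating (osd.36 as the example ID; the health check afterwards is just how I confirm the spillover is still there):

  ceph tell osd.36 compact                       # trigger an online RocksDB compaction on this OSD
  ceph health detail | grep -i 'spilled over'    # the BLUEFS spillover warning comes right back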
The OSD is a 1.92 TB SATA SSD; the WAL/DB is a 36 GB partition on NVMe. I tailed and tee'd the OSD's log during a manual compaction, at the normal logging level: https://pastebin.com/bcpcRGEe I have no idea how to make heads or tails of that log data, but maybe someone can figure out why this one OSD just refuses to compact? The OSD is 14.2.9, on Ubuntu 18.04 with kernel 4.15.0-96. I haven't played with ceph-bluestore-tool or ceph-kvstore-tool, but after seeing them mentioned earlier in this thread I do see that ceph-kvstore-tool <rocksdb|bluestore-kv?> has a compact command, which sounds like it may be the same thing ceph tell ... compact does under the hood?
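If that is in fact the offline equivalent, I'm guessing it would have to be run against the stopped OSD, something like the sketch below (untested on my end; osd.36 and the default data path are just examples):

  systemctl stop ceph-osd@36
  # open the OSD's BlueStore-backed RocksDB directly and compact it offline
  ceph-kvstore-tool bluestore-kv /var/lib/ceph/osd/ceph-36 compact
  systemctl start ceph-osd@36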
Also, not sure if this is helpful:

  osd.36 spilled over 192 KiB metadata from 'db' device (13 GiB used of 34 GiB) to slow device
You can see the breakdown between OMAP data and META data. After compacting again:

  osd.36 spilled over 192 KiB metadata from 'db' device (26 GiB used of 34 GiB) to slow device
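In case the raw counters are more useful than the health message, I can also pull the BlueFS numbers off the admin socket; a rough sketch of how I'd read them (assuming jq is available and the counter names are unchanged on 14.2.9):

  # db_used_bytes vs. db_total_bytes shows DB occupancy; slow_used_bytes is the spillover
  ceph daemon osd.36 perf dump | jq '.bluefs | {db_total_bytes, db_used_bytes, slow_used_bytes}'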
So the OMAP size stayed the same, while the metadata ballooned (while still conspicuously spilling over exactly 192 KiB). These OSDs hold a few RBD images, CephFS metadata, and librados objects (no RGW). The breakdown of OMAP sizes is pretty widely binned, but the GiB sizes are definitely the minority. Counting OSDs by the unit of their OMAP size with some simple bash-fu (rough sketch below) gives:

  KiB = 147
  MiB = 105
  GiB = 24

To further divide that, all of the GiB-sized OMAPs are on SSD OSDs.
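The bash-fu itself, in case anyone wants to reproduce the binning; this assumes ceph osd df -f json exposes a per-OSD kb_used_omap field (the field name may differ by release) and that jq is installed:

  ceph osd df -f json | jq -r '.nodes[].kb_used_omap' | awk '
      # values are in KiB; bin each OSD by the unit its OMAP size would be displayed in
      $1 < 1024        { kib++; next }
      $1 < 1024*1024   { mib++; next }
                       { gib++ }
      END { printf "KiB = %d\nMiB = %d\nGiB = %d\n", kib, mib, gib }'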
I have no idea if any of these data points are pertinent or helpful, but I want to give as clear a picture as possible to avoid chasing the wrong thread. Appreciate any help with this.

Thanks,
Reed