Sorry for the late response on this, but life has been really busy over the holidays.
On Thu, Nov 29, 2018 at 6:15 PM Tomasz Płaza <tomasz.plaza@xxxxxxxxxx> wrote:
Hi,
I have a Ceph 12.2.8 cluster on filestore with rather large omap dirs
(the average size is about 150G). Recently slow requests became a problem,
so after some digging I decided to convert the omap from leveldb to rocksdb.
The conversion went fine and the slow request rate dropped to an acceptable
level. Unfortunately, the conversion did not shrink most of the omap dirs,
so I tried online compaction:
Before compaction: 50G /var/lib/ceph/osd/ceph-0/current/omap/
After compaction: 100G /var/lib/ceph/osd/ceph-0/current/omap/
Purge and recreate: 1.5G /var/lib/ceph/osd/ceph-0/current/omap/
Before compaction: 135G /var/lib/ceph/osd/ceph-5/current/omap/
After compaction: 260G /var/lib/ceph/osd/ceph-5/current/omap/
Purge and recreate: 2.5G /var/lib/ceph/osd/ceph-5/current/omap/
To me, a compaction that makes the omap bigger is quite weird and
frustrating. Please help.
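For completeness, the conversion and the compaction were done roughly along
these lines (from memory, so treat it as a sketch; osd.0 and the default
paths are only examples, and filestore_omap_backend has to be set to rocksdb
in ceph.conf before the OSD is started again):
  # stop the OSD and copy the omap keys from the leveldb store into a new rocksdb store
  systemctl stop ceph-osd@0
  ceph-kvstore-tool leveldb /var/lib/ceph/osd/ceph-0/current/omap \
      store-copy /var/lib/ceph/osd/ceph-0/current/omap.rocksdb 10000 rocksdb
  # swap the directories, keeping the old store around until the OSD is confirmed healthy
  mv /var/lib/ceph/osd/ceph-0/current/omap /var/lib/ceph/osd/ceph-0/current/omap.leveldb
  mv /var/lib/ceph/osd/ceph-0/current/omap.rocksdb /var/lib/ceph/osd/ceph-0/current/omap
  systemctl start ceph-osd@0
  # online compaction afterwards, via the OSD admin socket
  ceph daemon osd.0 compact
  # (or offline, with the OSD stopped:)
  # ceph-kvstore-tool rocksdb /var/lib/ceph/osd/ceph-0/current/omap compact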
P.S. My cluster suffered from ongoing index resharding (it is disabled
now), and on many buckets with 4M+ objects I have a lot of old index
instances (per-bucket counts below; a rough way to reproduce them is
sketched after the list):
634 bucket1
651 bucket2
...
1231 bucket17
1363 bucket18
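One way to get such per-bucket counts (bucket1 is only an example name, and
the exact pipeline may differ a bit from what I actually ran):
  # current (live) instance id of the bucket
  radosgw-admin bucket stats --bucket=bucket1 | grep '"id"'
  # number of bucket.instance metadata entries belonging to that bucket;
  # everything other than the live id above is an old, leftover index
  radosgw-admin metadata list bucket.instance | grep -c '"bucket1:'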
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com