Re: OSD read latency grows over time



Hi Mark,

>> In v17.2.7 we enabled a feature that automatically performs a compaction
>> if too many tombstones are present during iteration in RocksDB.  It
>> might be worth upgrading to see if it helps (you might have to try
>> tweaking the settings if the defaults aren't helping enough).  The PR is
>> here:
we upgraded Ceph to v17.2.7 yesterday. Unfortunately, I still see growing
latency on the OSDs hosting the index pool. I will try tuning the
rocksdb_cf_compact_on_deletion options as you suggested.
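
For anyone following along: the values an OSD is currently running with can
be read from the admin socket on the OSD's host (osd.435 is just used as an
example here):

# ceph daemon osd.435 config get rocksdb_cf_compact_on_deletion_trigger
# ceph daemon osd.435 config get rocksdb_cf_compact_on_deletion_sliding_window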

I started by decreasing the deletion trigger from 16384 to 512 with:

# ceph tell 'osd.*' injectargs '--rocksdb_cf_compact_on_deletion_trigger 512'
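
To rule out that the injection silently failed, the running value can be
read back, e.g.:

# ceph config show osd.435 | grep rocksdb_cf_compact_on_deletion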

At first glance, nothing changed in the per-OSD latency graphs. To force
compactions, I also tried decreasing it to 32 deletions per window on a
single OSD where I see increasing latency, but according to the graphs
nothing changed after roughly 40 minutes:

# ceph tell 'osd.435' injectargs '--rocksdb_cf_compact_on_deletion_trigger 32'
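
In case it is useful to others hitting this: independent of the automatic
trigger, a one-off compaction can be forced on a single OSD to clear the
tombstones that have already accumulated (it may cause a short latency
spike while it runs):

# ceph tell osd.435 compact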

I haven't touched rocksdb_cf_compact_on_deletion_sliding_window yet; it is
still at its default of 32768 entries.
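
As I understand these two options, a file is marked for compaction once at
least `trigger` deletion entries are seen within any window of
`sliding_window` consecutive entries, so 32 out of 32768 corresponds to
roughly 0.1% tombstone density. Lowering the trigger or widening the window
should make the check fire on sparser tombstone runs. Widening the window
on the same OSD would look like this (65536 is just an example value, not a
recommendation):

# ceph tell 'osd.435' injectargs '--rocksdb_cf_compact_on_deletion_sliding_window 65536'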

Do you know whether rocksdb_cf_compact_on_deletion_trigger and
rocksdb_cf_compact_on_deletion_sliding_window can be changed at runtime
without an OSD restart?
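
For reference, ceph config help prints an option's metadata, including
whether it can be updated at runtime, so it should answer this for both
options:

# ceph config help rocksdb_cf_compact_on_deletion_trigger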

Thank you,
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
