Re: OSD read latency grows over time


 



On 1/26/24 11:26, Roman Pashin wrote:

> Unfortunately they cannot. You'll want to set them in centralized conf
> and then restart OSDs for them to take effect.

Got it. Thank you Josh! Will put it in the config of the affected OSDs and
restart them.

Just curious, can decreasing rocksdb_cf_compact_on_deletion_trigger from
16384 to 4096 hurt the performance of HDD OSDs in any way? I see no growing
latency on the HDD OSDs, where the data is stored, but it would be easier to
set it in the [osd] section for all OSDs at once rather than cherry-picking
only the SSD/NVMe OSDs.
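For what it's worth, centralized config supports device-class masks, so it is possible to target only the SSD/NVMe OSDs without listing them individually. A sketch (verify that the option name and mask syntax behave this way on your Ceph release):

```shell
# Set the trigger centrally for flash OSDs only, via device-class masks,
# instead of the broad [osd] section:
ceph config set osd/class:ssd  rocksdb_cf_compact_on_deletion_trigger 4096
ceph config set osd/class:nvme rocksdb_cf_compact_on_deletion_trigger 4096

# Confirm what a given OSD will actually use, then restart it so the
# change takes effect (osd.0 here is just an example):
ceph config get osd.0 rocksdb_cf_compact_on_deletion_trigger
```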


Potentially, if you set the trigger too low, you could force constant compactions; say, if you set it to trigger a compaction every time a tombstone is encountered. You really want to find the sweet spot where iterating over tombstones (possibly multiple times) is more expensive than doing a compaction. The defaults are basically just tuned to avoid the worst-case scenario where OSDs become laggy or even go into heartbeat timeout (and we're not 100% sure we got those right). I believe we've got a couple of big users that tune it more aggressively, though I'll let them speak up if they are able.
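To make the trade-off concrete, here is a simplified sketch of how a sliding-window deletion trigger behaves. It is modeled loosely on RocksDB's CompactOnDeletionCollector (which these Ceph options feed into); the parameter names `window_size` and `trigger` are illustrative stand-ins for rocksdb_cf_compact_on_deletion_sliding_window and rocksdb_cf_compact_on_deletion_trigger, not the real implementation:

```python
from collections import deque

def needs_compaction(entries, window_size, trigger):
    """Return True if, at any point, at least `trigger` tombstones fall
    within a sliding window of `window_size` consecutive entries.

    entries: iterable of booleans, True = tombstone (deletion marker).
    """
    window = deque(maxlen=window_size)
    deletions = 0
    for is_tombstone in entries:
        if len(window) == window.maxlen:
            # The oldest entry is about to fall out of the window.
            deletions -= window[0]
        window.append(is_tombstone)
        deletions += is_tombstone
        if deletions >= trigger:
            return True
    return False
```

With a very low trigger, almost any deletion-heavy file qualifies, which is how you end up forcing constant compactions; with a trigger sized near the window, only genuinely tombstone-dense ranges (the ones that make iteration expensive) are flagged.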


Mark


--
Thank you,
Roman
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



