Re: OSD read latency grows over time

Yes, we changed osd_memory_target to 10 GB on just our index OSDs. These OSDs have over 300 GB of lz4-compressed bucket index omap data. Here is a graph showing the latencies before/after that single change:

https://pasteboard.co/IMCUWa1t3Uau.png
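
For reference, the override can be applied per daemon through the centralized config; the OSD id and byte value below are illustrative rather than our exact commands:

  # raise the memory target for one index OSD to 10 GB
  ceph config set osd.11 osd_memory_target 10000000000
  # confirm the value the OSD will pick up
  ceph config get osd.11 osd_memory_target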

Cory Snyder


From: Anthony D'Atri <anthony.datri@xxxxxxxxx>
Sent: Friday, February 2, 2024 2:15 PM
To: Cory Snyder <csnyder@xxxxxxxxxxxxxxx>
Cc: ceph-users <ceph-users@xxxxxxx>
Subject: Re:  OSD read latency grows over time 
 
You adjusted osd_memory_target?  Higher than the default 4GB?



Another thing that we've found is that rocksdb can become quite slow if it doesn't have enough memory for its internal caches. As our cluster usage has grown, we've needed to increase OSD memory in accordance with bucket index pool usage. On one cluster, we found that increasing OSD memory improved rocksdb latencies by over 10x.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



