Re: OSD read latency grows over time

1024 PGs on NVMe.
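
(In case it's useful to anyone comparing numbers: the same details can be pulled from a cluster with something like the commands below. The index pool name is just an example; substitute your own.)

    # PG count for the RGW bucket index pool (pool name is an assumption).
    ceph osd pool get default.rgw.buckets.index pg_num

    # Device class per OSD shows up in the CLASS column.
    ceph osd tree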

From: Anthony D'Atri <anthony.datri@xxxxxxxxx>
Sent: Friday, February 2, 2024 2:37 PM
To: Cory Snyder <csnyder@xxxxxxxxxxxxxxx>
Subject: Re: OSD read latency grows over time
 
Thanks.  What type of media are your index OSDs? How many PGs?

> On Feb 2, 2024, at 2:32 PM, Cory Snyder <csnyder@xxxxxxxxxxxxxxx> wrote:
> 
> Yes, we changed osd_memory_target to 10 GB on just our index OSDs. These OSDs have over 300 GB of lz4 compressed bucket index omap data. Here is a graph showing the latencies before/after that single change:
> 
> https://pasteboard.co/IMCUWa1t3Uau.png
> 
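
(For anyone wanting to replicate this: such a change can be scoped with a config mask rather than applied cluster-wide. A rough sketch, assuming the index OSDs are the only nvme-class devices; adjust the mask to match your layout.)

    # 10 GiB, applied only to OSDs with CRUSH device class "nvme".
    ceph config set osd/class:nvme osd_memory_target 10737418240

    # Confirm what an individual OSD picked up (osd.0 is an example).
    ceph config show osd.0 osd_memory_target
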
> Cory Snyder
> 
> 
> From: Anthony D'Atri <anthony.datri@xxxxxxxxx>
> Sent: Friday, February 2, 2024 2:15 PM
> To: Cory Snyder <csnyder@xxxxxxxxxxxxxxx>
> Cc: ceph-users <ceph-users@xxxxxxx>
> Subject: Re: OSD read latency grows over time
>  
> You adjusted osd_memory_target?  Higher than the default 4GB?
> 
> Another thing that we've found is that rocksdb can become quite slow if it doesn't have enough memory for internal caches. As our cluster usage has grown, we've needed to increase OSD memory in accordance with bucket index pool usage. On one cluster, we found that increasing OSD memory improved rocksdb latencies by over 10x.
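
(A note for anyone measuring this themselves: the caches that osd_memory_target autotunes, and the rocksdb latency counters, can both be inspected over the admin socket on the OSD's host. osd.0 below is just an example.)

    # Per-pool memory accounting, including the bluestore/rocksdb caches
    # that osd_memory_target autotunes.
    ceph daemon osd.0 dump_mempools

    # rocksdb perf counters (get/submit latencies etc.); compare
    # readings before and after raising osd_memory_target.
    ceph daemon osd.0 perf dump rocksdb
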
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
