Thanks for the suggestions, I will try this.

/Z

On Fri, 7 Oct 2022 at 18:13, Konstantin Shalygin <k0ste@xxxxxxxx> wrote:

> Zakhar, try looking at the top of the slow ops in the daemon socket for this
> OSD; you may find 'snapc' operations, for example. From the rbd head object
> you can identify the rbd image, and then check how many snapshots are in the
> chain for that image. More than 10 snapshots for one image can increase
> client op latency to tens of milliseconds, even on NVMe drives that usually
> operate at microseconds or 1-2 ms.
>
> k
> Sent from my iPhone
>
> > On 7 Oct 2022, at 14:35, Zakhar Kirpichenko <zakhar@xxxxxxxxx> wrote:
> >
> > The drive doesn't show increased utilization on average, but it does
> > sporadically get more I/O than other drives, usually in short bursts. I am
> > now trying to find a way to trace this to a specific PG, pool and
> > object(s) – not sure if that is possible.
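
For anyone tracing a similar issue, the checks Konstantin describes could look
roughly like the sketch below. The OSD id (osd.12), pool name (testpool), image
name (testimage) and the rbd_data object name are placeholder assumptions, not
values from this thread:

    # Dump in-flight and recent slow ops from the suspect OSD's admin socket;
    # look for 'snapc' entries and note the object name
    # (rbd_data.<image id>.<object number>).
    ceph daemon osd.12 dump_ops_in_flight
    ceph daemon osd.12 dump_historic_slow_ops

    # Map an object name to its pool, PG and acting OSDs.
    ceph osd map testpool rbd_data.123456789abcdef.0000000000000001

    # The image whose 'block_name_prefix' matches the rbd_data prefix seen in
    # the slow ops is the one to check; then count the snapshots in its chain.
    rbd info testpool/testimage
    rbd snap ls testpool/testimage | wc -l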