Re: 16.2.10: ceph osd perf always shows high latency for a specific OSD

Thanks for the suggestions, I will try this.

/Z

On Fri, 7 Oct 2022 at 18:13, Konstantin Shalygin <k0ste@xxxxxxxx> wrote:

> Zakhar, try looking at the top slow ops in the daemon socket for this OSD;
> you may find 'snapc' operations, for example. From the RBD head object you
> can find the RBD image, and then check how many snapshots are in the chain
> for that image. More than 10 snaps for one image can increase client op
> latency to tens of milliseconds, even on NVMe drives that usually operate
> at microseconds or 1-2 ms.
>
>
> k
> Sent from my iPhone
>
> > On 7 Oct 2022, at 14:35, Zakhar Kirpichenko <zakhar@xxxxxxxxx> wrote:
> >
> > The drive doesn't show increased utilization on average, but it does
> > sporadically get more I/O than other drives, usually in short bursts. I am
> > now trying to find a way to trace this to a specific PG, pool, and
> > object(s); not sure if that is possible.
>
>
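
A command-level sketch of Konstantin's suggestion above, under stated
assumptions: "osd.12" is a placeholder OSD id, "volumes" a hypothetical pool,
and the image id is invented for illustration; the commands themselves are
standard ceph/rbd CLI.

    # Dump recent slow ops from the OSD's admin socket (run on the host
    # that carries the OSD):
    ceph daemon osd.12 dump_historic_slow_ops

    # RBD data objects are named rbd_data.<image-id>.<object-number>.
    # If a slow op references one, map the image id back to an image,
    # then count the snapshots in its chain:
    rbd info --pool volumes --image-id 5e786fbb4567
    rbd snap ls volumes/<image-name> | wc -l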
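For tracing a burst back to a specific PG, pool, and object(s), as asked in
the quoted message, something along these lines may work; again a sketch
using the same placeholder names:

    # Ops currently in flight include the object name; capture this
    # during a burst:
    ceph daemon osd.12 dump_ops_in_flight

    # Map an object name back to its PG and acting set:
    ceph osd map volumes rbd_data.5e786fbb4567.0000000000000400

    # List the PGs this OSD serves, to narrow the search to a pool:
    ceph pg ls-by-osd osd.12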