Re: OSD read latency grows over time

Hi Stefan,

> Do you make use of a separate db partition as well? And if so, where is
> it stored?

No, only the WAL partition is on a separate NVMe partition. I'm not sure
whether ceph-ansible can install Ceph with the db partition on a separate
device on v17.6.2.
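
For what it's worth, if the ceph-ansible branch you use still supports the
lvm_volumes format, a separate block.db can in principle be declared right
next to the WAL. This is only a rough sketch; the device, VG and LV names
below are made up, not taken from our setup:

    # group_vars/osds.yml -- sketch only, assuming lvm_volumes is supported
    osd_objectstore: bluestore
    lvm_volumes:
      - data: /dev/sdb        # device holding the OSD data
        db: db-lv-0           # block.db LV on the NVMe volume group
        db_vg: vg-nvme
        wal: wal-lv-0         # block.wal LV on the same NVMe VG
        wal_vg: vg-nvme

I haven't verified this against the exact ceph-ansible version in question,
so treat it as a starting point rather than a recipe.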

> Do you only see latency increase in reads? And not writes?
Exactly, I see it on reads only. Write latency looks pretty constant.

> Not sure what metrics you are looking at, but remember that some metrics
> are "long running averages" (from the start of the daemon). If you
> restart the daemon it might look like things dramatically changed, while
> in real life this does not need to be so.

The metrics are the standard Ceph ones, "ceph_osd_op_r_latency_sum" and
"ceph_osd_op_r_latency_count", and the graph shows "rate(latency_sum[1m]) /
rate(latency_count[1m])". The thing is that not only are the metrics
growing, client response times are growing with them. But when we do any
transformation on the index pool (migrating it to other OSDs or changing
pg_num), latency drops and then starts growing again.
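
In case it helps anyone reproduce the graph, something like the following
Prometheus recording rule would compute the same per-OSD value; the group
and record names here are just placeholders, not from our actual config:

    # prometheus rules file -- sketch only, names are placeholders
    groups:
      - name: ceph-osd-read-latency
        rules:
          - record: ceph_osd_op_r_latency:avg_1m
            # average read latency per OSD over the last minute
            expr: >
              rate(ceph_osd_op_r_latency_sum[1m])
              /
              rate(ceph_osd_op_r_latency_count[1m])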

I haven't caught what is causing this yet, but I can see that sometimes the
latency drops without any manual intervention and starts rising again (like
on this graph: https://postimg.cc/p9ys3yX5).
--
Thank you,
Roman
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


