Re: High IO utilization for bstore_kv_sync


 



Most likely you are seeing time spent waiting on fdatasync in bstore_kv_sync. If the drives you are using don't have power loss protection, they can't perform flushes quickly, and some consumer-grade drives are actually slower at this than HDDs.
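If you want to sanity-check whether flush latency is the culprit, you can time fdatasync against the SSD directly. Below is a minimal sketch (not something from this thread; the file path and iteration count are arbitrary placeholders), which writes 4 KiB blocks and measures each fdatasync call. Drives without power loss protection often show multi-millisecond times here, which matches bstore_kv_sync looking 100% busy while moving very little data. A tool such as fio can give you similar numbers.

/*
 * Rough sketch: measure per-write fdatasync latency on a test file,
 * similar to the flush pattern bstore_kv_sync generates when syncing
 * the RocksDB WAL.  Point the path at a file on the SSD under test.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    const char *path = argc > 1 ? argv[1] : "./fdatasync_test.dat";
    const int iterations = 1000;            /* placeholder count */
    char buf[4096];
    memset(buf, 0xab, sizeof(buf));

    int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); return 1; }

    double total_ms = 0.0, worst_ms = 0.0;
    for (int i = 0; i < iterations; i++) {
        if (write(fd, buf, sizeof(buf)) != (ssize_t)sizeof(buf)) {
            perror("write"); return 1;
        }

        /* Time only the flush, not the buffered write. */
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        if (fdatasync(fd) != 0) { perror("fdatasync"); return 1; }
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double ms = (t1.tv_sec - t0.tv_sec) * 1e3
                  + (t1.tv_nsec - t0.tv_nsec) / 1e6;
        total_ms += ms;
        if (ms > worst_ms) worst_ms = ms;
    }
    close(fd);

    printf("avg fdatasync: %.3f ms, worst: %.3f ms over %d iterations\n",
           total_ms / iterations, worst_ms, iterations);
    return 0;
}

On a drive with power loss protection the average is typically well under a millisecond; on consumer drives without it, averages of several milliseconds (and much worse outliers) are common, which is enough to make the sync thread appear saturated in iotop.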


Mark


On 2/22/24 11:04, Work Ceph wrote:
Hello guys,
We are running Ceph Octopus on Ubuntu 18.04, and we are noticing spikes of
IO utilization for the bstore_kv_sync thread during operations such as adding a
new pool and increasing/reducing the number of PGs in a pool.

It is odd, though, that the IO utilization (reported by iotop) is 99.99%
while the reported R/W speeds are low. The devices where we are seeing
these issues are all SSDs, although we are not using high-end SSD devices.

Have you guys seen such behavior?

Also, do you guys have any clues as to why the IO utilization would be high
when such a small amount of data is being read from and written to the
OSDs/disks?

--
Best Regards,
Mark Nelson
Head of Research and Development

Clyso GmbH
p: +49 89 21552391 12 | a: Minnesota, USA
w: https://clyso.com | e: mark.nelson@xxxxxxxxx

We are hiring: https://www.clyso.com/jobs/
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



