High IO utilization for bstore_kv_sync

Hello guys,
We are running Ceph Octopus on Ubuntu 18.04, and we are noticing spikes of
IO utilization for the bstore_kv_sync thread during operations such as adding
a new pool and increasing/reducing the number of PGs in a pool.

The odd thing is that the IO utilization (reported by iotop) is 99.99%,
yet the reported read/write speeds are low. The devices where we are seeing
this are all SSDs, although not high-end ones.
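
For what it's worth, a quick way to check whether the drives themselves handle
small synchronous writes poorly (which is roughly what bstore_kv_sync spends its
time waiting on, since it syncs the RocksDB WAL) is to time them directly. A
minimal sketch in Python; the test file path, write size, and iteration count
are placeholders, and it should point at a filesystem on the same SSD model,
not at a live OSD:

    #!/usr/bin/env python3
    # Time small write+fdatasync pairs, roughly what a RocksDB WAL sync does.
    import os, time

    TEST_FILE = "/mnt/ssd-test/syncfile"   # placeholder: any path on the SSD under test
    WRITE_SIZE = 4096                      # 4 KiB, similar to small WAL appends
    ITERATIONS = 1000

    buf = os.urandom(WRITE_SIZE)
    fd = os.open(TEST_FILE, os.O_CREAT | os.O_WRONLY, 0o600)
    try:
        latencies = []
        for _ in range(ITERATIONS):
            start = time.perf_counter()
            os.write(fd, buf)
            os.fdatasync(fd)               # force the flush to stable media
            latencies.append(time.perf_counter() - start)
        latencies.sort()
        avg_ms = sum(latencies) / len(latencies) * 1000
        p99_ms = latencies[int(len(latencies) * 0.99)] * 1000
        print(f"avg {avg_ms:.2f} ms, p99 {p99_ms:.2f} ms per {WRITE_SIZE} B write+fdatasync")
    finally:
        os.close(fd)
        os.unlink(TEST_FILE)

On consumer SSDs without power-loss protection each of those flushes can take
milliseconds, which would keep the thread close to 100% "busy" in iotop while
moving very little data.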

Have you guys seen such behavior?

Also, do you have any idea why the IO utilization would be so high when
such a small amount of data is being read from and written to the
OSDs/disks?
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
