You aren't showing the I/O size, only the latency. It looks like this is mostly sequential writes, since most of the I/O is being merged. Because you only assigned one volume (vdb), you will be limited to a single queue. I'd recommend adding more volumes and striping the data across them.
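If you want to try that, here's a rough sketch (assuming the extra RBD volumes show up in the guest as /dev/vdc and /dev/vdd and that the PostgreSQL data directory lives under /var/lib/postgresql; adjust both for your setup), striping an LVM logical volume across the new disks:

    # turn the new RBD-backed disks into LVM physical volumes
    pvcreate /dev/vdc /dev/vdd
    vgcreate pgvg /dev/vdc /dev/vdd
    # -i 2 stripes across both disks, -I 64 uses a 64 KiB stripe size
    lvcreate -n pgdata -i 2 -I 64 -l 100%FREE pgvg
    mkfs.ext4 /dev/pgvg/pgdata
    mount /dev/pgvg/pgdata /var/lib/postgresql

That way the writes fan out across several volumes (and their queues) instead of all funnelling through the single vdb device.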
On Sat, Aug 24, 2019, 11:00 AM linghucongsong <linghucongsong@xxxxxxx> wrote:
Hi all!

I use Ceph for the OpenStack VM disks. I have a VM running PostgreSQL, and I found that the disk on that VM is very busy and slow, while the Ceph cluster is healthy and has no slow requests. Even when the VM disk is completely busy, the Ceph cluster looks almost idle. My Ceph version is 12.2.8 and the VM disk uses an ext4 file system.

The PostgreSQL VM disk is very busy; see below:

avg-cpu:  %user  %nice  %system  %iowait  %steal  %idle
           0.53   0.00      1.6    16.55    0.00  81.31

Device:  rrqm/s   wrqm/s   r/s   w/s  rsec/s    wsec/s  avgrq-sz  avgqu-sz     await   r_await   w_await  svctm  %util
vdb        0.00  7425.00   2.0  65.5   40.00  63904.00    940.54    134.27  66966.54  12553.40  69042.94  14.71 100.05

The Ceph cluster is very idle:

osd  commit_latency(ms)  apply_latency(ms)
39   0   1
38   0   2
37   0   0
36   0   1
35   0   0
34   0   0
33   0   0
32   0   0
31   0   0
30   0   1
29   0   1
28   0   0
27   0   1
26   0   1
25   0   1
24   0   1
23   0   0
22   0   0
 9   0   0
 8   0   0
 7   0   0
 6   0   1
 5   0   1
 4   0   7
 0   0   1
 1   0   3
 2   0   2
 3   0   1
10   0   1
11   0   0
13   0   0
14   0   0
15   0   1
16   0   0
17   0   1
18   0   0
19   0   0
20   0   0
21   0   0

Can anybody tell me why?

Thanks in advance!
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx