extreme ceph-osd cpu load for rand. 4k write

Hello list,

while benchmarking I was wondering why the ceph-osd CPU load is so extremely high under random 4k write I/O.

Here are some examples from the benchmark runs:

random 4k write:  16,000 IOPS   180% CPU load in top for EACH ceph-osd process

random 4k read:   16,000 IOPS    19% CPU load in top for EACH ceph-osd process

seq 4M write:     800 MB/s       14% CPU load in top for EACH ceph-osd process

seq 4M read:      1600 MB/s       9% CPU load in top for EACH ceph-osd process

I can't understand why the load is so EXTREMELY high in this one case.
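For reference, a fio job file along these lines would generate a random 4k write workload like the one described (the original mail does not name the benchmark tool; the device path, queue depth, and job count here are placeholders, not values from my setup):

```ini
; hypothetical fio job for the random 4k write case
; /dev/rbd0, iodepth, and numjobs are placeholders
[global]
ioengine=libaio
direct=1
time_based
runtime=60

[randwrite-4k]
filename=/dev/rbd0
rw=randwrite
bs=4k
iodepth=32
numjobs=4
```

Changing rw=randwrite to randwrite/randread/write/read and bs=4k to 4M covers the other three cases above.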

Greets
Stefan
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
