Re: fio read and randread cpu usage results for qemu and host machine

I have redone the perf reports, this time with call trees and the previously missing debug symbols.
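
For reference, reports of this shape come from perf call-graph sampling, along
these lines (a sketch; the exact flags and sampling window are illustrative):

    perf record -a -g -- sleep 30    # sample all cpus with call graphs for ~30s
    perf report --sort comm,dso      # per-process / per-DSO tree report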


Here is my analysis:



fio + aio engine + krbd : 35000 iops randread 4K
-----------------------------------------------
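
A job along these lines produces this kind of run (a minimal sketch; device
path, iodepth and runtime are illustrative, and the image has to be mapped
with "rbd map" first):

    [krbd-randread]
    ; "aio engine" = linux native aio
    ioengine=libaio
    direct=1
    rw=randread
    bs=4k
    iodepth=32
    ; assumed device node of the mapped image
    filename=/dev/rbd0
    runtime=60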

TOP
---
87,1% idle : ~104% cpu total usage (8 cores: 12,9% busy x 8 ≈ 104%)

 2132 root      20   0 97740 4000 3612 S  47,9  0,0   0:14.50 fio
 2134 root      20   0     0    0    0 S  42,6  0,0   0:12.8  kworker/0:2
48624 root      20   0     0    0    0 S   6,3  0,0   0:54.9  kworker/2:1
48396 root      20   0     0    0    0 S   5,3  0,0   0:13.14 kworker/4:0
    3 root      20   0     0    0    0 S   4,0  0,0   2:34.53 ksoftirqd/0
48387 root      20   0     0    0    0 S   1,3  0,0   0:07.82 kworker/6:1
 2130 root      20   0 67788  38m  38m S   0,3  0,1   0:00.09


perf
----

+  24,23%    kworker/0:2  [kernel.kallsyms]
+  21,47%        swapper  [kernel.kallsyms]
+  21,10%            fio  [kernel.kallsyms]
+   5,37%            fio  fio
+   5,36%            fio  [libceph]
+   4,93%    kworker/2:1  [kernel.kallsyms]
+   4,89%    kworker/0:2  [libceph]
+   2,08%            fio  [rbd]
+   1,80%    kworker/0:2  [rbd]
+   1,69%        swapper  [tg3]
+   0,78%    kworker/0:2  [tg3]
+   0,74%    kworker/4:0  [kernel.kallsyms]
+   0,73%    ksoftirqd/0  [kernel.kallsyms]
+   0,66%    kworker/2:1  [libceph]
+   0,60%    kworker/3:1  [kernel.kallsyms]
...


I think that kworker/0:2 is the main rbd kernel worker: ~42% cpu.


FIO + RBD ENGINE : 35000 iops randread 4K
------------------------------------------
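
The corresponding rbd-engine job, as a minimal sketch (pool, image and client
names are illustrative):

    [librbd-randread]
    ioengine=rbd
    ; assumed cluster credentials and test image
    clientname=admin
    pool=rbd
    rbdname=testimg
    direct=1
    rw=randread
    bs=4k
    iodepth=32
    runtime=60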

TOP
---
28% idle : 576% cpu total usage (8 cores: 72% busy x 8 = 576%)

 1231 root      20   0  922m  38m  35m S 576,1  0,1   1:28.96 fio                                                                                                                                                                             
    3 root      20   0     0    0    0 S   1,3  0,0   2:27.24 ksoftirqd/0   

perf
----
+  25,00%          fio  [kernel.kallsyms]
+  24,84%          fio  libc-2.13.so  -----> malloc, free, ... from the fio rbd engine
+  16,92%          fio  librados.so.2.0.0
+  12,68%      swapper  [kernel.kallsyms]
+   9,87%          fio  librbd.so.1.0.0
+   4,64%          fio  libpthread-2.13.so
+   2,33%          fio  libstdc++.so.6.0.17
+   1,88%          fio  fio

librados + librbd = 26,79% of 576% = ~154% cpu.

So it seems that librbd + librados use about 3x more cpu than krbd. Is that normal?



For the fio rbd engine, it seems a lot of optimisation is missing:
malloc/free alone take around 25% of 576%, i.e. ~144% cpu, and other
overhead seems to come from the fio code itself.

Alexandre


Attachment: krbd-reportree.txt.gz
Description: GNU Zip compressed data

Attachment: rbd-reporttree.txt.gz
Description: GNU Zip compressed data

