Does Ceph have an impact on imp IO performance?


 



Hi all,

While using Ceph as the virtual machine backend and running an imp (Oracle import) operation, IO performance is about 1/10 of the physical machine, roughly 600 KB/s.

But when running dd as an IO performance test, such as
dd if=/dev/zero of=/1.file bs=64k count=1000000
the average IO speed is about 50 MB/s.
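Note that a plain buffered dd from /dev/zero mostly measures the guest's page cache, which may explain why it reports 50 MB/s while imp sees 600 KB/s. A sketch of a fairer comparison (file names and counts here are illustrative assumptions, not from the original test):

```shell
# Buffered write: the reported rate mostly reflects the guest page cache,
# not the backing disk.
dd if=/dev/zero of=/tmp/dd_buffered.img bs=64k count=1000

# Flush data to disk before dd reports its rate: sustained write throughput.
dd if=/dev/zero of=/tmp/dd_fdatasync.img bs=64k count=1000 conv=fdatasync

# Small synchronous writes: closer to a database import's commit traffic.
dd if=/dev/zero of=/tmp/dd_dsync.img bs=4k count=200 oflag=dsync
```

If the oflag=dsync run also drops to hundreds of KB/s on the Ceph-backed disk, the bottleneck is likely per-write flush latency rather than raw bandwidth.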


Here is the physical machine result while executing the imp operation:

top - 07:19:50 up 4 days,  1:29,  5 users,  load average: 2.21, 4.66, 4.14
Tasks: 142 total,   1 running, 141 sleeping,   0 stopped,   0 zombie
Cpu(s):  3.5%us,  3.5%sy,  0.0%ni, 48.6%id, 44.3%wa,  0.1%hi,  0.0%si,  0.0%st
Mem:   1956020k total,  1939816k used,    16204k free,   118032k buffers
Swap:  4192956k total,   351660k used,  3841296k free,  1239960k cached

 


iostat
Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
sda              13.00  1561.00   89.00   88.00  7224.00  6456.00   154.58     7.46   40.70   5.65 100.00

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
          17.16    0.00    4.90   44.61    0.00   33.33

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
sda               4.00  1762.00   43.00   84.00  5592.00  7216.00   201.70     4.26   33.83   7.84  99.60

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
          13.46    0.00    3.37   32.21    0.00   50.96

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
sda              11.00  1198.00   42.00  106.00  3868.00  5076.00   120.86     2.95   21.51   6.76 100.00

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
          16.08    0.00    8.54   28.14    0.00   47.24

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0.00  2651.00   31.00  148.00   124.00 10956.00   123.80     2.56   14.37   5.50  98.40


Here is the virtual machine result while executing the imp operation.

At the beginning, IO speed is the same as on the physical machine:

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
vda               0.00 17872.00    1.00  139.00     4.00 62720.00   896.06   187.16 1911.89   7.14 100.00

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.82    0.00    8.45   64.31    0.00   26.43

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
vda               0.00  7722.00    1.00   43.00     4.00 16200.00   736.55   172.50 1380.36  22.73 100.00

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.00    0.00    0.72   76.57    0.00   22.71

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
vda               0.00 10132.00    0.00  123.00     0.00 55960.00   909.92   189.00 2288.46   8.13 100.00

But after memory usage rises, IO speed drops to about 1/10 of the physical machine:



avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.00    0.00    0.62    0.62    0.00   98.76

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
vda               0.00    86.14    0.00  160.40     0.00   665.35     8.30     1.04    6.25   6.17  99.01

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.23    0.00    0.23    0.47    0.00   99.06

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
vda               0.00    94.00    0.00  188.00     0.00   752.00     8.00     1.00    5.57   5.34 100.40

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.00    0.00    0.50    0.99    0.00   98.51

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
vda               0.00    86.00    0.00  172.00     0.00   688.00     8.00     0.99    5.65   5.77  99.20
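The slow-phase samples above show avgrq-sz of about 8 sectors (4 KB writes), versus 700-900 in the fast phase. If fio is available in the guest, a job mimicking both patterns may reproduce the gap; the job names, sizes, and runtime below are illustrative assumptions, not from the original test:

```ini
; Illustrative fio job file: compare large buffered writes (the dd-like
; workload) against small fdatasync'd writes (the slow-phase pattern).
[global]
directory=/tmp
size=256m
runtime=30
time_based

[buffered-64k]
rw=write
bs=64k
ioengine=psync

[sync-4k]
stonewall
rw=write
bs=4k
ioengine=psync
fdatasync=1
```

If sync-4k reports throughput near 600 KB/s on the Ceph-backed disk while buffered-64k stays fast, the difference is per-write sync latency rather than bandwidth.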


Thanks in advance
Michael

