Re: qemu-img convert vs rbd import performance

It's already in qemu 2.9

http://git.qemu.org/?p=qemu.git;a=commit;h=2d9187bc65727d9dd63e2c410b5500add3db0b0d


"
This patches introduces 2 new cmdline parameters. The -m parameter to specify
the number of coroutines running in parallel (defaults to 8). And the -W parameter to
allow qemu-img to write to the target out of order rather than sequential. This improves
performance as the writes do not have to wait for each other to complete.
"

And performance increased dramatically!

Ran it with Luminous and qemu 2.9.0 (this graph is from the host running qemu-img; it shows network bandwidth to the ceph cluster):

http://storage6.static.itmages.ru/i/17/1223/h_1514004003_2271300_d3ee031fda.png

From 11:05 to 11:28: 35% of a 100 GB image. I started googling for qemu news and found this message, then appended -m 16 -W. Network interface utilisation rose from ~150 Mbit/s to ~2500 Mbit/s (this is a convert from one rbd pool to another).
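For reference, a run like the one above would look roughly like this. The -m and -W flags are the qemu 2.9 options from the commit quoted above; the pool and image names here are hypothetical, and the command needs a live ceph cluster to actually run:

```shell
#!/bin/sh
# Sketch of an rbd-to-rbd conversion with the new qemu 2.9 options.
# -m 16 : run 16 coroutines in parallel (default is 8)
# -W    : allow out-of-order writes to the target
# -p    : show progress
# src-pool/dst-pool and vm-disk are placeholder names.
qemu-img convert -p -f raw -O raw -m 16 -W \
    "rbd:src-pool/vm-disk" "rbd:dst-pool/vm-disk"
```

Note that -W makes the write order on the target nondeterministic while the convert is in progress, which is why it is not the default.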



k
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



