This patch set adds support for native vectored AIO ops. Instead of using
our own thread pool for queueing io ops, we use the kernel.

Numbers - short story: Overall increase of at least 10%, with sequential
ops benefiting the most.

Numbers - long story:

Before:

Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   2     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
512              1G   550  97 26638   7 25050   4  2942  91 339188  35  1711  68
Latency             40602us   12269ms    2755ms   27333us     119ms   77816us
Version  1.96       ------Sequential Create------ --------Random Create--------
512                 -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 12172  22 +++++ +++ 23771  34 20209  36 +++++ +++ 22181  33
Latency               153ms     566us     571us     204us      56us     624us
1.96,1.96,512,2,1319996851,1G,,550,97,26638,7,25050,4,2942,91,339188,35,1711,68,16,,,,,12172,22,+++++,+++,23771,34,20209,36,+++++,+++,22181,33,40602us,12269ms,2755ms,27333us,119ms,77816us,153ms,566us,571us,204us,56us,624us

After:

Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   2     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
tux            512M   565  96 49307  19 33464   6  3270  99 923539  57  2599  64
Latency             15285us     240ms    2496ms    7478us    5586us    2878ms
Version  1.96       ------Sequential Create------ --------Random Create--------
tux                 -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 26992  48 +++++ +++ 27018  38 25618  45 +++++ +++ 24272  34
Latency              1145us    1311us    1708us     408us     186us     283us
1.96,1.96,tux,2,1319997274,512M,,565,96,49307,19,33464,6,3270,99,923539,57,2599,64,16,,,,,26992,48,+++++,+++,27018,38,25618,45,+++++,+++,24272,34,15285us,240ms,2496ms,7478us,5586us,2878ms,1145us,1311us,1708us,408us,186us,283us

I'd be happy if
someone could run more benchmarks, as these results look very optimistic,
but I kept getting similar numbers over and over.

Sasha Levin (11):
  kvm tools: Switch to using an enum for disk image types
  kvm tools: Hold a copy of ops struct inside disk_image
  kvm tools: Remove the non-iov interface from disk image ops
  kvm tools: Modify disk ops usage
  kvm tools: Modify behaviour on missing ops ptr
  kvm tools: Add optional callback on disk op completion
  kvm tools: Remove qcow nowrite function
  kvm tools: Split io request from completion
  kvm tools: Hook virtio-blk completion to disk op completion
  kvm tools: Add aio read write functions
  kvm tools: Use native vectored AIO in virtio-blk

 tools/kvm/Makefile                 |    1 +
 tools/kvm/disk/core.c              |  116 +++++++++++++++++++---------------
 tools/kvm/disk/qcow.c              |   58 ++++++++++++++---
 tools/kvm/disk/raw.c               |   87 +++++++++++++++++--------
 tools/kvm/include/kvm/disk-image.h |   50 ++++++++-------
 tools/kvm/include/kvm/read-write.h |    6 ++
 tools/kvm/include/kvm/virtio-blk.h |    1 +
 tools/kvm/read-write.c             |   24 +++++++
 tools/kvm/virtio/blk.c             |  122 ++++++++++++++++++++++++------------
 9 files changed, 312 insertions(+), 153 deletions(-)

-- 
1.7.7.1