On Thu, Jul 28, 2011 at 4:44 PM, Stefan Hajnoczi <stefanha@xxxxxxxxx> wrote:
> On Thu, Jul 28, 2011 at 3:29 PM, Liu Yuan <namei.unix@xxxxxxxxx> wrote:
>
> Did you investigate userspace virtio-blk performance? If so, what
> issues did you find?
>
> I have a hacked-up tree here that basically implements vhost-blk in userspace:
> http://repo.or.cz/w/qemu/stefanha.git/blob/refs/heads/virtio-blk-data-plane:/hw/virtio-blk.c
>
> * A dedicated virtqueue thread sleeps on ioeventfd
> * Guest memory is pre-mapped and accessed directly (not using QEMU's
>   usual memory access functions)
> * Linux AIO is used, the QEMU block layer is bypassed
> * Completion interrupts are injected from the virtqueue thread using ioctl
>
> I will try to rebase onto qemu-kvm.git/master (this work is several
> months old). Then we can compare to see how much of the benefit can
> be obtained in userspace.

Here is the rebased virtio-blk-data-plane tree:
http://repo.or.cz/w/qemu-kvm/stefanha.git/shortlog/refs/heads/virtio-blk-data-plane

When I run it on my laptop with an Intel X25-M G2 SSD I see a latency
reduction compared to mainline userspace virtio-blk. I'm not posting
results because I did quick fio runs without ensuring a quiet
benchmarking environment.

There are a couple of things that could be modified:

* I/O request merging is done to mimic bdrv_aio_multiwrite(), but
  vhost-blk does not do this. Try turning it off?
* epoll(2) is used, but perhaps select(2)/poll(2) have lower latency
  for this use case. Try another event mechanism.

Let's see how it compares to vhost-blk first. I can tweak it if we
want to investigate further.

Yuan: Do you want to try the virtio-blk-data-plane tree? You don't
need to change the qemu-kvm command-line options.

Stefan
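
For reference, a minimal sketch of the event loop described in the quoted
message (a dedicated virtqueue thread that sleeps on an ioeventfd, submits
I/O with Linux AIO while bypassing the QEMU block layer, and then signals
completion to the guest) could look like the code below. The struct layout,
the irqfd-based completion signalling, and the hard-coded 4 KB read are
illustrative assumptions, not the actual virtio-blk-data-plane code; the
real implementation parses vring descriptors out of pre-mapped guest memory
and, per the mail, injects the completion interrupt with an ioctl.

/*
 * Illustrative sketch only, not the actual virtio-blk-data-plane code.
 * One dedicated thread: sleep on the ioeventfd, service a request with
 * Linux AIO, signal completion to the guest.
 */
#include <libaio.h>
#include <stdint.h>
#include <unistd.h>

struct vq_thread {
    int ioeventfd;          /* kicked by the guest (KVM_IOEVENTFD) */
    int irqfd;              /* completion signal to the guest (KVM_IRQFD) */
    int disk_fd;            /* disk image opened with O_DIRECT */
    io_context_t aio_ctx;   /* set up earlier with io_setup() */
};

static void *vq_thread_fn(void *opaque)
{
    struct vq_thread *vq = opaque;
    static char buf[4096] __attribute__((aligned(512)));
    uint64_t val;

    for (;;) {
        /* Sleep until the guest kicks the virtqueue. */
        if (read(vq->ioeventfd, &val, sizeof(val)) != sizeof(val)) {
            break;
        }

        /* Submit with Linux AIO, bypassing the QEMU block layer.  A real
         * implementation builds iocbs from the vring descriptors in
         * pre-mapped guest memory; a fixed 4 KB read stands in here. */
        struct iocb iocb, *iocbs[] = { &iocb };
        io_prep_pread(&iocb, vq->disk_fd, buf, sizeof(buf), 0);
        io_submit(vq->aio_ctx, 1, iocbs);

        /* Reap the completion and notify the guest.  The mail says the
         * real code uses an ioctl from the virtqueue thread; an irqfd
         * write is shown here as a stand-in. */
        struct io_event event;
        io_getevents(vq->aio_ctx, 1, 1, &event, NULL);
        val = 1;
        write(vq->irqfd, &val, sizeof(val));
    }
    return NULL;
}

Note that this sketch sidesteps the epoll(2) vs. select(2)/poll(2) question
raised above by blocking directly in read(2) on the single ioeventfd, which
is only viable when the thread has exactly one file descriptor to wait on.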