RE: Some more basic questions..

>>>>A few additional questions:)
>>>>
>>>>1. If IO to a block device goes through QEMU and not vhost, are there
>>>>data copies between kernel and user mode if I do IO to a block device
>>>>or is it zero copy? Kind of related to Question (2) also.
>>>>
>>>An additional copy is avoided only by using vhost, so if you are using vhost you can call it relatively zero copy.
>>I know that KVM supports network tx zero-copy when using vhost, but an rx copy is still performed in vhost, because the NIC cannot tell
>>which VM's rx buffers to DMA into before L2 switching (unless vhost uses page-flipping between HVA->HPA and GPA->HPA, or macvtap over an SR-IOV VF is used).
>>Storage does not have this limitation, so can vhost-blk and vhost-scsi avoid the data copy for both writes and reads?
>>
>>IIUC, even if vhost is not used, QEMU using Linux native AIO can avoid the data copy between user and kernel space, right?
>>
>A copy in the kernel will be done irrespective of Rx/Tx when vhost is used. One copy between user space and kernel space in
>QEMU is avoided when vhost is used. That is why using vhost is "relatively" zero copy.
>
I agree that one copy between user space and kernel space in QEMU is avoided when vhost is used.
But when vhost is used, the data copy in the kernel can also be removed during tx, not only the copy between user space and kernel space.
This is implemented in tun.c; please see the commit below:
https://git.kernel.org/cgit/virt/kvm/kvm.git/commit/drivers/net/tun.c?id=0690899b4d4501b3505be069b9a687e68ccbe15b
It is called tx zero-copy, just like what is done in xen-netback.

So, the zero copy I mentioned above means that the disk hardware DMAs directly from VM memory to the disk for a write,
or from the disk to VM memory for a read,
just like the network tx zero-copy implemented in tun.c.
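
To illustrate the idea, here is a simplified sketch of how I understand the tx zero-copy path (this is not the actual code from the commit above; the real code also registers a ubuf_info completion callback so vhost-net knows when the DMA has finished and the guest pages can be released):

/*
 * Simplified sketch (not the actual tun.c code): instead of
 * copy_from_user() into a freshly allocated kernel buffer, the
 * pages backing the guest buffer are pinned and attached to the
 * skb as fragments, so the NIC can DMA straight from guest memory.
 */
#include <linux/errno.h>
#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/skbuff.h>

static int attach_user_pages(struct sk_buff *skb, unsigned long base, size_t len)
{
	int i = 0;

	while (len) {
		struct page *page;
		int off  = base & ~PAGE_MASK;
		int size = min_t(size_t, len, PAGE_SIZE - off);

		/* Pin the user (guest) page; read access is enough for tx. */
		if (get_user_pages_fast(base, 1, 0, &page) != 1)
			return -EFAULT;

		/* Point an skb fragment at the pinned page: no data copy. */
		skb_fill_page_desc(skb, i++, page, off, size);
		skb->data_len += size;
		skb->len      += size;
		skb->truesize += size;

		base += size;
		len  -= size;
	}
	return 0;
}

I would expect storage zero copy to follow the same principle: build the bio/sg list directly on the pinned guest pages instead of copying into a kernel bounce buffer first.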

>I don't think that they can be classified as vhost-blk or vhost-scsi. Vhost is an add-on to an existing guest driver,
>like virtio. So they can be called virtio-blk, virtio-scsi.
>
Sorry for my poor English.
vhost-blk is the counterpart of the virtio-blk backend implemented in QEMU, moving the processing from QEMU into the kernel;
vhost-scsi is the counterpart of the virtio-scsi backend implemented in QEMU, moving the processing from QEMU into the kernel.
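(For comparison with the established vhost-net case, which I only mention as an analogy: the device the guest sees stays virtio, and vhost just moves the backend data path from QEMU into the kernel. With QEMU that is enabled with something like

  -netdev tap,id=net0,vhost=on -device virtio-net-pci,netdev=net0

so by "vhost-blk" / "vhost-scsi" I mean the in-kernel counterparts of the virtio-blk / virtio-scsi backends in the same sense.)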

>So, using PV drivers (virtio-blk) for disk access makes accesses faster, and using vhost on top of this will make them
>even faster, as one copy will be avoided.
>
>Linux AIO will help avoid blocking on an I/O, but I doubt it has anything to do with the copy across user space and kernel space.
I'm not sure whether Linux native AIO can avoid the data copy between user space and kernel space.
Does the kernel get the physical address behind the user-process virtual address via get_user_pages/use_mm, and then pass it to the hard disk for DMA?
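
My understanding (please correct me if I am wrong) is that this is what the O_DIRECT path does: the kernel pins the user buffer and the disk DMAs into it directly, bypassing the page cache, which is why, as far as I know, QEMU's aio=native is normally used together with cache=none. A minimal userspace sketch with libaio (file name and sizes are just placeholders):

/*
 * Minimal Linux native AIO (io_submit) + O_DIRECT read example.
 * With O_DIRECT the data does not go through the page cache: the
 * kernel pins the user buffer and the disk DMAs into it directly,
 * so the extra user<->kernel copy is avoided.
 *
 * Build: gcc -o aio_direct_read aio_direct_read.c -laio
 * Usage: ./aio_direct_read <file-or-block-device>
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <libaio.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	if (argc < 2) {
		fprintf(stderr, "usage: %s <file-or-block-device>\n", argv[0]);
		return 1;
	}

	int fd = open(argv[1], O_RDONLY | O_DIRECT);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* O_DIRECT requires sector-aligned buffer, offset and length. */
	void *buf;
	if (posix_memalign(&buf, 4096, 4096)) {
		perror("posix_memalign");
		return 1;
	}

	io_context_t ctx = 0;
	int ret = io_setup(1, &ctx);
	if (ret < 0) {
		fprintf(stderr, "io_setup: %s\n", strerror(-ret));
		return 1;
	}

	struct iocb cb;
	struct iocb *cbs[1] = { &cb };
	io_prep_pread(&cb, fd, buf, 4096, 0);	/* read 4 KiB at offset 0 */

	ret = io_submit(ctx, 1, cbs);
	if (ret != 1) {
		fprintf(stderr, "io_submit: %s\n", strerror(-ret));
		return 1;
	}

	struct io_event ev;
	ret = io_getevents(ctx, 1, 1, &ev, NULL);	/* wait for completion */
	if (ret != 1) {
		fprintf(stderr, "io_getevents: %s\n", strerror(-ret));
		return 1;
	}
	printf("read returned %ld bytes\n", (long)ev.res);

	io_destroy(ctx);
	free(buf);
	close(fd);
	return 0;
}

Without O_DIRECT (i.e. going through the page cache) there is still a copy, native AIO or not; the AIO part only makes the submission non-blocking.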

