Re: [PATCH 0/16 v6] PCI: Linux kernel SR-IOV support

Andi Kleen wrote:
> Anthony Liguori <anthony@xxxxxxxxxxxxx> writes:
>> What we would rather do in KVM is have the VFs appear in the host as
>> standard network devices.  We would then like to back our existing PV
>> driver with such a VF directly, bypassing the host networking stack.
>> A key feature here is being able to fill the VF's receive queue with
>> guest memory instead of host kernel memory, so that you get zero-copy
>> receive traffic.  This will perform at least as well as passthrough
>> and avoids all the ugliness of dealing with SR-IOV in the guest.
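
To make that receive path concrete, here's a rough sketch of what the
host side might look like.  struct vf_rx_ring and vf_post_rx_buffer()
are made-up driver hooks for illustration, not any real driver's API;
the pinning and mapping calls are the stock kernel ones:

#include <linux/mm.h>
#include <linux/dma-mapping.h>

/* Hypothetical VF driver hooks, for illustration only: */
struct vf_rx_ring;
int vf_post_rx_buffer(struct vf_rx_ring *ring, dma_addr_t addr, size_t len);

static int post_guest_page(struct device *dev, struct vf_rx_ring *ring,
			   unsigned long guest_uva)
{
	struct page *page;
	dma_addr_t addr;
	int ret;

	/* Pin the guest page backing this userspace address. */
	ret = get_user_pages_fast(guest_uva, 1, 1, &page);
	if (ret != 1)
		return ret < 0 ? ret : -EFAULT;

	/* Map it for device DMA (through the IOMMU, if one is active). */
	addr = dma_map_page(dev, page, 0, PAGE_SIZE, DMA_FROM_DEVICE);
	if (dma_mapping_error(dev, addr)) {
		put_page(page);
		return -ENOMEM;
	}

	/* Hand the buffer straight to the VF's receive ring, so the
	 * device DMAs into memory the guest already owns. */
	return vf_post_rx_buffer(ring, addr, PAGE_SIZE);
}

The point is that the NIC writes directly into guest pages, so there is
no copy on the host side at all.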

> But you shift a lot of ugliness into the host network stack again.
> Not sure that is a good trade-off.
>
> Also, it would always require context switches, and I believe one of
> the reasons for the PV/VF model is very low-latency I/O; having
> heavyweight switches to the host and back would work against that.

I don't think it's established that the PV/VF model will have lower
latency than virtio-net.  virtio-net requires a world switch to send a
group of packets, and the cost of that switch (provided it stays in the
kernel) is only a few thousand cycles on the most modern processors.
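
If anyone wants to put a number on that world switch, one rough way is
to time a no-op hypercall from inside a guest.  A throwaway sketch of a
guest module (the hypercall number here is deliberately bogus, so the
host just returns -KVM_ENOSYS, but you still pay the full exit/entry
round trip):

#include <linux/module.h>
#include <linux/timex.h>
#include <asm/kvm_para.h>

static int __init vmcall_cost_init(void)
{
	cycles_t t0, t1;

	/* Bogus hypercall number: KVM returns -KVM_ENOSYS, but we
	 * still pay the full guest->host->guest world switch. */
	t0 = get_cycles();
	kvm_hypercall0(0x7fffffff);
	t1 = get_cycles();

	printk(KERN_INFO "vmcall round trip: %llu cycles\n",
	       (unsigned long long)(t1 - t0));

	/* Throwaway test: refuse to load so no exit handler is needed. */
	return -EAGAIN;
}
module_init(vmcall_cost_init);
MODULE_LICENSE("GPL");

A real measurement would loop and take the minimum to filter out noise,
but even a single shot gives you the right order of magnitude.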

Using VT-d means that every DMA translation that misses in the IOTLB
potentially requires four fetches from main memory to walk the I/O page
tables.  There will be some additional packet latency using VT-d
compared to native; it's just not known how much at this time.
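
To put a very rough number on it (assuming ~60ns for an uncached fetch
from main memory, which is a guess, not a measurement): a miss that
walks a four-level I/O page table costs up to 4 * 60ns = ~240ns before
the DMA itself can even begin, versus essentially nothing on an IOTLB
hit.  How much of that shows up at the packet level depends entirely on
the IOTLB hit rate, and nobody has published numbers for that yet.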

Regards,

Anthony Liguori




