On 24 April 2015 at 11:47, Stefan Hajnoczi <stefanha@xxxxxxxxx> wrote:
> My concern is the overhead of the vhost_net component copying
> descriptors between NICs.
I see. So you would not have to reserve CPU resources for vswitches. Instead you would give all cores to the VMs and they would pay for their own networking. This would be especially appealing in the extreme case where all networking is "Layer 1" connectivity between local virtual machines.
This would make VM<->VM links behave differently from VM<->network links. I suppose that when creating VMs you would need to be conscious of whether you are placing them on the same host or NUMA node, so that you can predict what network performance will be available.
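For illustration, here is a minimal sketch (in Python) of the kind of locality check an orchestration layer would then have to perform. It assumes that layer already knows which host CPUs each VM's vCPUs are pinned to (e.g. from "virsh vcpupin") and only reads the standard Linux sysfs NUMA topology; the pinnings shown at the bottom are hypothetical.

#!/usr/bin/env python3
# Sketch: do two sets of pinned vCPUs land on the same NUMA node?
import glob
import re

def cpus_in(cpulist):
    """Expand a kernel cpulist string like '0-3,8,10-11' into a set of CPU ids."""
    cpus = set()
    for part in cpulist.strip().split(","):
        if not part:
            continue
        if "-" in part:
            lo, hi = part.split("-")
            cpus.update(range(int(lo), int(hi) + 1))
        else:
            cpus.add(int(part))
    return cpus

def numa_nodes_of(cpus):
    """Return the set of NUMA nodes that cover the given host CPUs."""
    nodes = set()
    for path in glob.glob("/sys/devices/system/node/node*/cpulist"):
        node = int(re.search(r"node(\d+)", path).group(1))
        with open(path) as f:
            if cpus & cpus_in(f.read()):
                nodes.add(node)
    return nodes

# Hypothetical pinnings for two VMs on the same host.
vm_a_cpus = cpus_in("0-3")
vm_b_cpus = cpus_in("8-11")

shared = numa_nodes_of(vm_a_cpus) & numa_nodes_of(vm_b_cpus)
if shared:
    print("VM A and VM B share NUMA node(s):", sorted(shared))
else:
    print("VM A and VM B are on different NUMA nodes")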
For what it is worth, I think this would make life more difficult for network operators hosting DPDK-style network applications ("NFV"). Virtio-net would become a more complex abstraction, the orchestration systems would need to take this into account, and there would be more opportunity for interoperability problems between virtual machines.
The simpler alternative that I prefer is to provide network operators with a Virtio-net abstraction that behaves and performs in exactly the same way for all kinds of network traffic -- whether or not the VMs are on the same machine or NUMA node.
That would be more in line with SR-IOV behavior, which seems to me like the other horse in this race. Perhaps my world view here is too narrow, though, and other technologies like ivshmem are more relevant than I give them credit for?