Andi Kleen wrote:
> Anthony Liguori <anthony@xxxxxxxxxxxxx> writes:
>
>> What we would rather do in KVM is have the VFs appear in the host as
>> standard network devices. We would then like to back our existing PV
>> driver with this VF directly, bypassing the host networking stack. A key
>> feature here is being able to fill the VF's receive queue with guest
>> memory instead of host kernel memory, so that you can get zero-copy
>> receive traffic. This will perform at least as well as passthrough
>> and avoids all the ugliness of dealing with SR-IOV in the guest.
>
> But you shift a lot of ugliness into the host network stack again.
> Not sure that is a good trade-off.

The net effect will be positive. We will finally have aio networking from
userspace (we can send process memory without resorting to sendfile()),
and we'll be able to assign a queue to a process, which will enable all
sorts of interesting high-performance things; basically VJ channels
without kernel involvement.

> Also it would always require context switches, and I believe one of the
> reasons for the PV/VF model is very low latency IO; having heavyweight
> switches to the host and back would work against that.

It's true that latency would suffer (or, alternatively, CPU consumption
would increase).

--
error compiling committee.c: too many arguments to function

_______________________________________________
Virtualization mailing list
Virtualization@xxxxxxxxxxxxxxxxxxxxxxxxxx
https://lists.linux-foundation.org/mailman/listinfo/virtualization