On Thu, Feb 19, 2009 at 10:06:17PM +1030, Rusty Russell wrote:
> On Thursday 19 February 2009 10:01:42 Simon Horman wrote:
> > On Wed, Feb 18, 2009 at 10:08:00PM +1030, Rusty Russell wrote:
> > >
> > > 2) Direct NIC attachment
> > >    This is particularly interesting with SR-IOV or other multiqueue
> > >    nics, but for boutique cases or benchmarks, could be for normal
> > >    NICs.  So far I have some very sketched-out patches: for the
> > >    attached nic dev_alloc_skb() gets an skb from the guest (which
> > >    supplies them via some kind of AIO interface), and a branch in
> > >    netif_receive_skb() which returns it to the guest.  This bypasses
> > >    all firewalling in the host though; we're basically having the
> > >    guest process drive the NIC directly.
> >
> > Hi Rusty,
> >
> > Can I clarify that the idea with utilising SR-IOV would be to assign
> > virtual functions to guests?  That is, something conceptually similar
> > to PCI pass-through in Xen (although I'm not sure that anyone has
> > virtual function pass-through working yet).
>
> Not quite: IMHO PCI passthrough is the *wrong* way to do it: it makes
> migration complicated (if not impossible), and requires emulation or
> the same NIC on the destination host.
>
> This would be the *host* seeing the virtual functions as multiple NICs,
> then the ability to attach a given NIC directly to a process.
>
> This isn't guest-visible: the kvm process is configured to connect
> directly to a NIC, rather than (say) bridging through the host.

Hi Rusty, Hi Chris,

Thanks for the clarification. I think that the approach that Xen
recommends for migration is to use a bonding device that accesses the
pass-through device if present, and a virtual nic otherwise. The idea
that you outline above does sound somewhat cleaner :-)

> > If so, wouldn't this also be useful on machines that have multiple
> > NICs?
>
> Yes, but mainly as a benchmark hack AFAICT :)

Ok, I was under the impression that at least in the Xen world it was
something people actually used. But I could easily be mistaken.

> Hope that clarifies,
> Rusty.

On Thu, Feb 19, 2009 at 03:37:52AM -0800, Chris Wright wrote:
> * Simon Horman (horms@xxxxxxxxxxxx) wrote:
> > On Wed, Feb 18, 2009 at 10:08:00PM +1030, Rusty Russell wrote:
> > > 2) Direct NIC attachment
> > >    This is particularly interesting with SR-IOV or other multiqueue
> > >    nics, but for boutique cases or benchmarks, could be for normal
> > >    NICs.  So far I have some very sketched-out patches: for the
> > >    attached nic dev_alloc_skb() gets an skb from the guest (which
> > >    supplies them via some kind of AIO interface), and a branch in
> > >    netif_receive_skb() which returns it to the guest.  This bypasses
> > >    all firewalling in the host though; we're basically having the
> > >    guest process drive the NIC directly.
> >
> > Can I clarify that the idea with utilising SR-IOV would be to assign
> > virtual functions to guests?  That is, something conceptually similar
> > to PCI pass-through in Xen (although I'm not sure that anyone has
> > virtual function pass-through working yet).  If so, wouldn't this
> > also be useful on machines that have multiple NICs?
>
> This would be the typical usecase for sr-iov.  But I think Rusty is
> referring to giving a nic "directly" to a guest but the guest is still
> seeing a virtio nic (not pass-through/device-assignment).  So there's
> no bridge, and zero copy so the dma buffers are supplied by the guest,
> but the host has the driver for the physical nic or the VF.

-- 
Simon Horman
  VA Linux Systems Japan K.K., Sydney, Australia Satellite Office
  H: www.vergenet.net/~horms/             W: www.valinux.co.jp/en

--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html