On Wed, Feb 28, 2018 at 04:11:31PM +0100, Jiri Pirko wrote:
> Wed, Feb 28, 2018 at 03:32:44PM CET, mst@xxxxxxxxxx wrote:
> >On Wed, Feb 28, 2018 at 08:08:39AM +0100, Jiri Pirko wrote:
> >> Tue, Feb 27, 2018 at 10:41:49PM CET, kubakici@xxxxx wrote:
> >> >On Tue, 27 Feb 2018 13:16:21 -0800, Alexander Duyck wrote:
> >> >> Basically we need some sort of PCI or PCIe topology mapping for the
> >> >> devices that can be translated into something we can communicate over
> >> >> the communication channel.
> >> >
> >> >Hm. This is probably a completely stupid idea, but if we need to
> >> >start marshalling configuration requests/hints, maybe the entire problem
> >> >could be solved by opening a netlink socket from the hypervisor? Even make
> >> >teamd run on the hypervisor side...
> >>
> >> Interesting. That would be trickier than just forwarding one genetlink
> >> socket to the hypervisor.
> >>
> >> Also, I think the solution should handle multiple guest OSes. What
> >> I'm thinking about is some generic bonding description passed over some
> >> communication channel into the VM. The VM either uses it for configuration,
> >> or ignores it if it is not smart enough/updated enough.
> >
> >For sure, we could build virtio-bond to pass that info to guests.
>
> What do you mean by "virtio-bond"? A virtio_net extension?

I mean a new device supplying topology information to guests, with updates
whenever VMs are started, stopped or migrated.

> >
> >Such an advisory mechanism would not be a replacement for the mandatory
> >passthrough fallback flag proposed, but OTOH it's much more flexible.
> >
> >--
> >MST

_______________________________________________
Virtualization mailing list
Virtualization@xxxxxxxxxxxxxxxxxxxxxxxxxx
https://lists.linuxfoundation.org/mailman/listinfo/virtualization