Tue, Feb 20, 2018 at 05:04:29PM CET, alexander.duyck@xxxxxxxxx wrote:
>On Tue, Feb 20, 2018 at 2:42 AM, Jiri Pirko <jiri@xxxxxxxxxxx> wrote:
>> Fri, Feb 16, 2018 at 07:11:19PM CET, sridhar.samudrala@xxxxxxxxx wrote:
>>>Patch 1 introduces a new feature bit VIRTIO_NET_F_BACKUP that can be
>>>used by the hypervisor to indicate that the virtio_net interface
>>>should act as a backup for another device with the same MAC address.
>>>
>>>Patch 2 is in response to the community request for a 3-netdev
>>>solution. However, it creates some issues we'll get into in a moment.
>>>It extends virtio_net to use an alternate datapath when available and
>>>registered. When the BACKUP feature is enabled, the virtio_net driver
>>>creates an additional 'bypass' netdev that acts as a master device
>>>and controls 2 slave devices. The original virtio_net netdev is
>>>registered as the 'backup' netdev, and a passthru/VF device with the
>>>same MAC gets registered as the 'active' netdev. Both 'bypass' and
>>>'backup' netdevs are associated with the same 'pci' device. The user
>>>accesses the network interface via the 'bypass' netdev. The 'bypass'
>>>netdev chooses the 'active' netdev as the default for transmits when
>>>it is available with link up and running.
>>
>> Sorry, but this is ridiculous. You are apparently re-implementing
>> part of the bonding driver as part of a NIC driver. The bond and team
>> drivers are mature solutions, well tested, broadly used, with lots of
>> issues resolved in the past. What you try to introduce is a weird
>> shortcut that already has a couple of issues, as you mentioned, and
>> will certainly have many more. Also, I'm pretty sure that in the
>> future, someone will come up with ideas like multiple VFs, LACP and
>> similar bonding things.
>
>The problem with the bond and team drivers is that they are too large
>and have too many interfaces available for configuration, so as a
>result they can really screw this interface up.
>
>Essentially this is meant to be a bond that is more-or-less managed by
>the host, not the guest.
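For illustration, the transmit-path selection of the 3-netdev scheme
described above (prefer the 'active' VF slave when it is up, fall back
to the 'backup' virtio slave otherwise) could be sketched roughly like
this. The struct and function names are invented for this sketch and
are not the actual driver's:

```c
/* Sketch of the 'bypass' master's xmit slave selection. All names here
 * are hypothetical; the real driver works on struct net_device. */
#include <stdbool.h>
#include <stddef.h>

struct slave_dev {
	bool registered;	/* slave is enslaved to the bypass master */
	bool link_up;		/* carrier up and device running */
};

struct bypass_dev {
	struct slave_dev *active;	/* passthru/VF slave, may be NULL */
	struct slave_dev *backup;	/* virtio_net slave */
};

/* Pick the slave that should carry the next transmitted packet:
 * the VF datapath when usable, otherwise the virtio fallback. */
static struct slave_dev *bypass_select_xmit(struct bypass_dev *b)
{
	if (b->active && b->active->registered && b->active->link_up)
		return b->active;
	return b->backup;
}
```

The point of the sketch is that the policy is fixed and trivial, which
is exactly the "limited, 0config" behavior being argued about in this
thread.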
>We want the host to be able to configure it and have it automatically
>kick in on the guest. For now we want to avoid adding too much
>complexity, as this is meant to be just the first step. Trying to go
>in and implement the whole solution right from the start based on
>existing drivers is going to be a massive time sink and will likely
>never get completed, due to the fact that there is always going to be
>some other thing that will interfere.
>
>My personal hope is that we can look at doing a virtio-bond sort of
>device that will handle all this as well as providing a communication
>channel, but that is much further down the road. For now we only have
>a single bit, so the goal for now is trying to keep this as simple as
>possible.

I have another usecase that would require the solution to be different
than what you suggest. Consider the following scenario:
- The baremetal host has 2 SR-IOV NICs.
- There is a VM that has 1 VF from each NIC: vf0, vf1. No virtio_net.
- The baremetal host would like to somehow tell the VM to bond vf0 and
  vf1 together, and how this bonding should be configured, according to
  how the VF representors are configured on the baremetal host (LACP,
  for example).

The baremetal host could decide to remove any VF during the VM runtime,
and it can add another VF there. For migration, it can add virtio_net.
The VM should be instructed to bond all interfaces together according
to how the baremetal host decided - as it knows better.

For this we need a separate communication channel from baremetal to VM
(perhaps something re-usable already exists), and we need something to
listen to the events coming from this channel (kernel/userspace) and to
react accordingly (create bond/team, enslave, etc).

Now the question is: is it possible to merge the demands you have and
the generic needs I described into a single solution? From what I see,
that would be quite hard/impossible.
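A minimal sketch of the "listen to the channel and react" logic
described above, assuming an invented event set and a fixed slave
table. The event names, the action strings, and the channel itself are
all hypothetical; a real daemon would translate the actions into
netlink/teamd operations:

```c
/* Hypothetical sketch of a guest daemon reacting to host events by
 * maintaining the bond the host asked for. Nothing here is a real
 * API; it only models the decision logic. */
#include <stdio.h>
#include <string.h>

#define MAX_SLAVES 8

enum host_event { EV_ADD_IF, EV_DEL_IF };

struct bond_state {
	int bond_exists;		/* bond/team device created yet? */
	int n_slaves;
	char slaves[MAX_SLAVES][16];	/* slave names, e.g. "vf0" */
};

/* React to one event from the host channel; the returned string names
 * the action the daemon would perform (illustration only). */
static const char *handle_event(struct bond_state *s, enum host_event ev,
				const char *ifname)
{
	switch (ev) {
	case EV_ADD_IF:
		if (!s->bond_exists) {
			/* Bonding mode (e.g. LACP/802.3ad) would come
			 * from the event payload from the host. */
			s->bond_exists = 1;
		}
		if (s->n_slaves < MAX_SLAVES)
			snprintf(s->slaves[s->n_slaves++],
				 sizeof(s->slaves[0]), "%s", ifname);
		return "enslave";
	case EV_DEL_IF:
		for (int i = 0; i < s->n_slaves; i++) {
			if (strcmp(s->slaves[i], ifname))
				continue;
			memmove(&s->slaves[i], &s->slaves[i + 1],
				(size_t)(s->n_slaves - i - 1) *
				sizeof(s->slaves[0]));
			s->n_slaves--;
			return "release";
		}
		return "ignore";
	}
	return "ignore";
}
```

This also shows why the generic case wants userspace: the reaction to
an event depends on host-side policy (LACP vs. active-backup, VF vs.
virtio_net), which a fixed in-driver scheme cannot express.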
So at the end, I think that we have to end up with 2 solutions:

1) virtio_net/netvsc in-driver bonding - a very limited, stupid,
   0config solution that works for all (no matter what OS you use in
   the VM)

2) a team/bond solution with the assistance of, preferably, a
   userspace daemon getting info from the baremetal host. This is not
   0config, but minimal config - the user just has to define that this
   "magic bonding" should be on. This covers all possible usecases,
   including multiple VFs, RDMA, etc.

Thoughts?

_______________________________________________
Virtualization mailing list
Virtualization@xxxxxxxxxxxxxxxxxxxxxxxxxx
https://lists.linuxfoundation.org/mailman/listinfo/virtualization