> -----Original Message-----
> From: Stephen Hemminger [mailto:shemminger@xxxxxxxxxx]
> Sent: Monday, July 23, 2012 11:36 AM
> To: Chris Friesen
> Cc: Don Dutile; Ben Hutchings; David Miller; yuvalmin@xxxxxxxxxxxx; Rose,
> Gregory V; netdev@xxxxxxxxxxxxxxx; linux-pci@xxxxxxxxxxxxxxx
> Subject: Re: New commands to configure IOV features
>
> On Mon, 23 Jul 2012 09:09:38 -0600
> Chris Friesen <chris.friesen@xxxxxxxxxxx> wrote:
>
> > On 07/23/2012 08:03 AM, Don Dutile wrote:
> > > On 07/20/2012 07:42 PM, Chris Friesen wrote:
> > >>
> > >> I actually have a use-case where the guest needs to be able to
> > >> modify the MAC addresses of network devices that are actually VFs.
> > >>
> > >> The guest is bonding the network devices together, so the bonding
> > >> driver in the guest expects to be able to set all the slaves to the
> > >> same MAC address.
> > >>
> > >> As I read the ixgbe driver, this should be possible as long as the
> > >> host hasn't explicitly set the MAC address of the VF. Is that
> > >> correct?
> > >>
> > >> Chris
> > >
> > > Interesting tug of war: hypervisors will want to set the macaddrs
> > > for security reasons, while some guests may want to set the macaddr
> > > for (valid?) config reasons.
> >
> > In our case we have control over both guest and host anyway, so it's
> > less of a security issue. In the general case, though, I could see it
> > being an interesting problem.
> >
> > Back to the original discussion, though--has anyone got any ideas about
> > the best way to trigger runtime creation of VFs? I don't know what
> > the binary API looks like, but via sysfs I could see something like
> >
> > echo number_of_new_vfs_to_create > /sys/bus/pci/devices/<address>/create_vfs
> >
> > Something else that occurred to me--is there buy-in from driver
> > maintainers? I know the Intel ethernet drivers (what I'm most
> > familiar with) would need to be substantially modified to support
> > on-the-fly addition of new VFs.
> > Currently they assume that the number of VFs is known at module init
> > time.
>
> Why couldn't rtnl_link_ops be used for this? It is already the preferred
> interface for creating vlans, bond devices, and other virtual devices.
> The one issue is whether the created VFs exist in the kernel as devices
> or are only visible to the guest.

I would say that rtnl_link_ops is network-oriented and not appropriate for
something like a storage controller or graphics device, which are two other
common SR-IOV capable devices. I think the interface should be oriented
toward PCIe and the PCI subsystems in the kernel.

- Greg

--
To unsubscribe from this list: send the line "unsubscribe linux-pci" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
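[Editor's note: the sysfs interface proposed in the thread above could be wrapped in a small shell helper, sketched below. The `create_vfs` attribute name is only the proposal under discussion in this thread, not a merged kernel interface, so the path and write semantics here are assumptions for illustration.]

```shell
#!/bin/sh
# Sketch of the runtime VF creation interface proposed in this thread.
# ASSUMPTION: a per-device sysfs attribute named "create_vfs" as suggested
# by Chris Friesen; this attribute is hypothetical, not an existing kernel file.

# Build the sysfs path for a given PCI device address.
vf_sysfs_path() {
    # $1 = PCI address, e.g. 0000:01:00.0
    printf '/sys/bus/pci/devices/%s/create_vfs' "$1"
}

# Request creation of $2 VFs on PCI device $1 at runtime.
create_vfs() {
    path=$(vf_sysfs_path "$1")
    if [ ! -w "$path" ]; then
        echo "no writable $path (driver may not support runtime VF creation)" >&2
        return 1
    fi
    echo "$2" > "$path"
}

# Example (would require root and a driver implementing the proposal):
#   create_vfs 0000:01:00.0 4
```

This mirrors how per-device tunables are usually exposed through sysfs; the open question Stephen raises (whether the resulting VFs appear as kernel devices) would determine what shows up under /sys/bus/pci/devices afterwards.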