On Tue, 2 Feb 2021 19:41:16 +0200
Max Gurtovoy <mgurtovoy@xxxxxxxxxx> wrote:

> On 2/2/2021 6:06 PM, Cornelia Huck wrote:
> > On Mon, 1 Feb 2021 11:42:30 -0700
> > Alex Williamson <alex.williamson@xxxxxxxxxx> wrote:
> >
> >> On Mon, 1 Feb 2021 12:49:12 -0500
> >> Matthew Rosato <mjrosato@xxxxxxxxxxxxx> wrote:
> >>
> >>> On 2/1/21 12:14 PM, Cornelia Huck wrote:
> >>>> On Mon, 1 Feb 2021 16:28:27 +0000
> >>>> Max Gurtovoy <mgurtovoy@xxxxxxxxxx> wrote:
> >>>>
> >>>>> This patch doesn't change any logic; it only aligns with the
> >>>>> concept of vfio_pci_core extensions. Extensions that are related
> >>>>> to a platform, and not to a specific vendor of PCI devices,
> >>>>> should be part of the core driver. Extensions that are specific
> >>>>> to a PCI device vendor should go into a dedicated vendor
> >>>>> vfio-pci driver.
> >>>>
> >>>> My understanding is that igd means support for Intel graphics,
> >>>> i.e. a strict subset of x86. If there are other future extensions
> >>>> that e.g. only make sense for some devices found only on AMD
> >>>> systems, I don't think they should all be included under the same
> >>>> x86 umbrella.
> >>>>
> >>>> Similar reasoning for nvlink, which only seems to cover support
> >>>> for some GPUs under Power and is not a general platform-specific
> >>>> extension IIUC.
> >>>>
> >>>> We can arguably do the zdev -> s390 rename (as zpci appears only
> >>>> on s390, and all PCI devices will be zpci on that platform),
> >>>> although I'm not sure about the benefit.
> >>>
> >>> As far as I can tell, there isn't any benefit for s390; it's just
> >>> "re-branding" to match the platform name rather than the zdev
> >>> moniker, which admittedly may make it clearer to someone outside
> >>> of s390 that any PCI device on s390 is a zdev/zpci type, and thus
> >>> will use this extension to vfio_pci(_core). This would still be
> >>> true even if we added something later that builds atop it (e.g. a
> >>> platform-specific device like ism-vfio-pci). Or for that matter,
> >>> mlx5 via vfio-pci on s390x uses these zdev extensions today and
> >>> would need to continue using them in a world where
> >>> mlx5-vfio-pci.ko exists.
> >>>
> >>> I guess all that to say: if such a rename matches the 'grand
> >>> scheme' of this design, where we treat arch-level extensions to
> >>> vfio_pci(_core) as "vfio_pci_(arch)", then I'm not particularly
> >>> opposed to the rename. But by itself it's not very exciting :)
> >>
> >> This all seems like the wrong direction to me. The goal here is to
> >> modularize vfio-pci into a core library and derived vendor modules
> >> that make use of that core library. If existing device-specific
> >> extensions within vfio-pci cannot be turned into vendor modules
> >> through this support, and are instead redefined as platform-specific
> >> features of the new core library, that feels like we're already
> >> admitting failure of this core library to support known devices,
> >> let alone future devices.
> >>
> >> IGD is a specific set of devices. They happen to rely on some
> >> platform-specific support, whose availability should be determined
> >> via the vendor module probe callback. Packing that support into an
> >> "x86" component as part of the core feels not only short-sighted,
> >> but also avoids addressing the issues around how userspace
> >> determines an optimal module to use for a device.
> >
> > Hm, it seems that not all current extensions to the vfio-pci code
> > are created equal.
> >
> > IIUC, we have igd and nvlink, which are sets of devices that only
> > show up on x86 or ppc, respectively, and may rely on some special
> > features of those architectures/platforms. The important point is
> > that you have a device identifier that you can match a driver
> > against.
>
> Maybe you can supply the ids?
>
> Alexey K, I saw you've been working on NVLINK2 for P9. Can you supply
> the exact ids that should be bound to this driver?
>
> I'll add it to V3.

As noted previously, if we start adding ids for vfio drivers then we
create conflicts with the native host driver. We cannot register a
vfio PCI driver that automatically claims devices. At best, this
NVLink driver and an IGD driver could reject devices that they don't
support, i.e. NVIDIA GPUs without the correct platform-provided
support or Intel GPUs without an OpRegion.
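To make that concrete, here's a completely untested sketch of such a
probe callback; igd_opregion_present() and
vfio_pci_core_register_device() are made-up placeholder names, not an
existing API:

  #include <linux/errno.h>
  #include <linux/pci.h>
  #include <linux/types.h>

  /* Placeholder helpers, assumed to exist for the example. */
  bool igd_opregion_present(struct pci_dev *pdev);
  int vfio_pci_core_register_device(struct pci_dev *pdev);

  static int igd_vfio_pci_probe(struct pci_dev *pdev,
                                const struct pci_device_id *id)
  {
          /*
           * Refuse devices this variant driver can't actually support:
           * without the platform-provided OpRegion there's no
           * IGD-specific functionality to add, so fail probe rather
           * than claim the device.
           */
          if (!igd_opregion_present(pdev))
                  return -ENODEV;

          return vfio_pci_core_register_device(pdev);
  }

  static struct pci_driver igd_vfio_pci_driver = {
          .name  = "igd-vfio-pci",
          .probe = igd_vfio_pci_probe,
          /*
           * No .id_table: the driver must not auto-claim devices out
           * from under the native host driver; binding would happen
           * explicitly, e.g. via driver_override.
           */
  };

Thanks,

Alex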