[+cc Lorenzo]

On Tue, Nov 16, 2021 at 03:11:36PM -0700, Nirmal Patel wrote:
> During VT-d pass-through, the VMD driver occasionally fails to
> enumerate underlying NVMe devices when repetitive reboots are
> performed in the guest OS. The issue can be resolved by resetting
> VMD root ports for proper enumeration and triggering secondary bus
> reset which will also propagate reset through downstream bridges.

Hmm.  Does not say what the root cause is, just that it can be
"fixed" by a reset, but OK.

> Signed-off-by: Nirmal Patel <nirmal.patel@xxxxxxxxxxxxxxx>
> Reviewed-by: Jon Derrick <jonathan.derrick@xxxxxxxxx>
> ---
> ---
> v4->v5: Fixing small nitpick fix.
> v3->v4: Using pci_reset_bus function for secondary bus reset instead of
>         manually triggering secondary bus reset, addressing review
>         comments of v3.
> v2->v3: Combining two functions into one, Remove redundant definations
>         and Formatting fixes
>
>  drivers/pci/controller/vmd.c | 37 ++++++++++++++++++++++++++++++++++++
>  1 file changed, 37 insertions(+)
>
> diff --git a/drivers/pci/controller/vmd.c b/drivers/pci/controller/vmd.c
> index a5987e52700e..a905fce6232f 100644
> --- a/drivers/pci/controller/vmd.c
> +++ b/drivers/pci/controller/vmd.c
> @@ -498,6 +498,40 @@ static inline void vmd_acpi_begin(void) { }
>  static inline void vmd_acpi_end(void) { }
>  #endif /* CONFIG_ACPI */
>
> +static void vmd_domain_reset(struct vmd_dev *vmd)
> +{
> +        u16 bus, max_buses = resource_size(&vmd->resources[0]);
> +        u8 dev, functions, fn, hdr_type;
> +        char __iomem *base;

I don't understand what's going on here.

> +        for (bus = 0; bus < max_buses; bus++) {
> +                for (dev = 0; dev < 32; dev++) {
> +                        base = vmd->cfgbar + PCIE_ECAM_OFFSET(bus,
> +                                                PCI_DEVFN(dev, 0), 0);
> +
> +                        hdr_type = readb(base + PCI_HEADER_TYPE) &
> +                                        PCI_HEADER_TYPE_MASK;
> +                        functions = (hdr_type & 0x80) ? 8 : 1;

This looks like an open-coded version of pci_hdr_type() for every
possible device on every possible bus below the VMD, regardless of
whether that device actually exists.  So most of these reads will
result in Unsupported Request errors and 0xff data returns, which
you interpret as a multi-function device.

> +                        for (fn = 0; fn < functions; fn++) {
> +                                base = vmd->cfgbar + PCIE_ECAM_OFFSET(bus,
> +                                                PCI_DEVFN(dev, fn), 0);
> +
> +                                hdr_type = readb(base + PCI_HEADER_TYPE) &
> +                                                PCI_HEADER_TYPE_MASK;

So you open-code pci_hdr_type() again, for lots of functions that
don't exist.

> +                                if (hdr_type != PCI_HEADER_TYPE_BRIDGE ||
> +                                    (readw(base + PCI_CLASS_DEVICE) !=
> +                                     PCI_CLASS_BRIDGE_PCI))
> +                                        continue;

This at least will skip the rest for functions that don't exist,
since hdr_type will be 0x7f (not 0x01, PCI_HEADER_TYPE_BRIDGE).

> +
> +                                memset_io(base + PCI_IO_BASE, 0,
> +                                          PCI_ROM_ADDRESS1 - PCI_IO_BASE);

And here you clear the base & limit registers of the IO and MEM
windows of each bridge, again with basically a hand-written special
case of pci_write_config_*().

This looks like you're trying to disable the windows.  AFAICT, the
spec doesn't say what happens to the window base/limit registers
after reset, but it does say that reset clears the I/O Space Enable
and Memory Space Enable bits in the Command Register.  For bridges,
that will disable the windows.

Writing zero to both base & limit does not disable the windows; it
just sets them to [io 0x0000-0x0fff], [mem 0x00000000-0x000fffff],
etc.

So I'm not really convinced that this function is necessary at all,
given that you do a secondary bus reset on every VMD root port right
afterwards.
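
For illustration only (untested, and not something I'm asking for):
if you really did want to close the bridge windows by hand here,
you'd have to make base > limit, or clear the Command register
enables, which is what the reset already does.  Reusing the "base"
pointer from the loop above, that would look something like:

        u16 cmd;

        /* Close the I/O window: base 0xf000 > limit 0x0fff */
        writeb(0xf0, base + PCI_IO_BASE);
        writeb(0x00, base + PCI_IO_LIMIT);

        /* Close the MEM window: base 0xfff00000 > limit 0x000fffff */
        writew(0xfff0, base + PCI_MEMORY_BASE);
        writew(0x0000, base + PCI_MEMORY_LIMIT);

        /* Or just clear the enables, which a reset does anyway */
        cmd = readw(base + PCI_COMMAND);
        cmd &= ~(PCI_COMMAND_IO | PCI_COMMAND_MEMORY);
        writew(cmd, base + PCI_COMMAND);

rather than the memset_io() above.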
> +                        }
> +                }
> +        }
> +}
> +
>  static void vmd_attach_resources(struct vmd_dev *vmd)
>  {
>          vmd->dev->resource[VMD_MEMBAR1].child = &vmd->resources[1];
> @@ -801,6 +835,9 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
>          vmd_acpi_begin();
>
>          pci_scan_child_bus(vmd->bus);
> +        vmd_domain_reset(vmd);
> +        list_for_each_entry(child, &vmd->bus->children, node)
> +                pci_reset_bus(child->self);
>          pci_assign_unassigned_bus_resources(vmd->bus);
>
>          /*
> --
> 2.27.0
>
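
If the secondary bus reset alone turns out to be enough, I'd expect
the vmd_enable_domain() hunk to shrink to just the pci_reset_bus()
loop, i.e. something like this (untested sketch; assumes the
"struct pci_bus *child" iterator is declared earlier in the
function):

        pci_scan_child_bus(vmd->bus);

        list_for_each_entry(child, &vmd->bus->children, node)
                pci_reset_bus(child->self);

        pci_assign_unassigned_bus_resources(vmd->bus);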