On Tue, Jan 16, 2024 at 01:37:32PM -0700, Nirmal Patel wrote:
> On Fri, 2024-01-12 at 16:55 -0600, Bjorn Helgaas wrote:
> ...
> > Maybe it will help if we can make the situation more concrete.
> > I'm basing this on the logs at
> > https://bugzilla.kernel.org/show_bug.cgi?id=215027.  I assume the
> > 10000:e0:06.0 Root Port and the 10000:e1:00.0 NVMe device are both
> > passed through to the guest.  I'm sure I got lots wrong here, so
> > please correct me :)
> >
> > Host OS sees:
> >
> >   PNP0A08 host bridge to 0000 [bus 00-ff]
> >     _OSC applies to domain 0000
> >     OS owns [PCIeHotplug SHPCHotplug PME PCIeCapability LTR] in domain 0000
> >   vmd 0000:00:0e.0: PCI host bridge to domain 10000 [bus e0-ff]
> >     no _OSC applies in domain 10000;
> >     host OS owns all PCIe features in domain 10000
> >   pci 10000:e0:06.0: [8086:464d]            # VMD root port
> >   pci 10000:e0:06.0: PCI bridge to [bus e1]
> >   pci 10000:e0:06.0: SltCap: HotPlug+       # Hotplug Capable
> >   pci 10000:e1:00.0: [144d:a80a]            # nvme
> >
> > Guest OS sees:
> >
> >   PNP0A03 host bridge to 0000 [bus 00-ff]
> >     _OSC applies to domain 0000
> >     platform owns [PCIeHotplug ...]         # _OSC doesn't grant to OS
> >   pci 0000:e0:06.0: [8086:464d]             # VMD root port
> >   pci 0000:e0:06.0: PCI bridge to [bus e1]
> >   pci 0000:e0:06.0: SltCap: HotPlug+        # Hotplug Capable
> >   pci 0000:e1:00.0: [144d:a80a]             # nvme
> >
> > So the guest OS sees that the VMD Root Port supports hotplug, but
> > it can't use it because it was not granted ownership of the
> > feature?
>
> You are correct about _OSC not granting access in Guest.

I was assuming the VMD Endpoint itself was not visible in the guest
and the VMD Root Ports appeared in domain 0000 described by the guest
PNP0A03.  The PNP0A03 _OSC would then apply to the VMD Root Ports.
But my assumption appears to be wrong:

> This is what I see on my setup.
>
> Host OS:
>
>   ACPI: PCI Root Bridge [PC11] (domain 0000 [bus e2-fa])
>   acpi PNP0A08:0b: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
>   acpi PNP0A08:0b: _OSC: platform does not support [SHPCHotplug AER DPC]
>   acpi PNP0A08:0b: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR]
>   PCI host bridge to bus 0000:e2
>
>   vmd 0000:e2:00.5: PCI host bridge to bus 10007:00
>   vmd 0000:e2:00.5: Bound to PCI domain 10007
>
> Guest OS:
>
>   ACPI: PCI Root Bridge [PC0G] (domain 0000 [bus 03])
>   acpi PNP0A08:01: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
>   acpi PNP0A08:01: _OSC: platform does not support [PCIeHotplug SHPCHotplug PME AER LTR DPC]
>   acpi PNP0A08:01: _OSC: OS now controls [PCIeCapability]
>
>   vmd 0000:03:00.0: Bound to PCI domain 10000
>
>   SltCap: PwrCtrl- MRL- AttnInd- PwrInd- HotPlug- Surprise-

Your example above suggests that the guest has a PNP0A08 device for
domain 0000, with an _OSC, the guest sees the VMD Endpoint at
0000:03:00.0, and the VMD Root Ports and devices below them are in
domain 10000.  Right?

If we decide the _OSC that covers the VMD Endpoint does *not* apply to
the domain below the VMD bridge, the guest has no _OSC for domain
10000, so the guest OS should default to owning all the PCIe features
in that domain.

Bjorn
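
A standalone sketch of the default-ownership rule in that last
paragraph (illustrative only, not kernel code: the struct and helpers
below are invented for this example, and only the native_* names are
meant to echo the fields on struct pci_host_bridge):

  /*
   * Sketch of the rule: a host bridge starts out with the OS owning
   * the PCIe features, and only an _OSC that actually applies to that
   * domain can take features away.
   */
  #include <stdbool.h>
  #include <stdio.h>

  struct host_bridge {
          int  domain;
          bool native_pcie_hotplug;
          bool native_pme;
          bool native_aer;
  };

  /* A new host bridge defaults to OS ownership of every feature,
   * i.e. the state we'd expect when no _OSC applies to the domain. */
  static void init_host_bridge(struct host_bridge *b, int domain)
  {
          b->domain = domain;
          b->native_pcie_hotplug = true;
          b->native_pme = true;
          b->native_aer = true;
  }

  /* An _OSC that applies to the domain can only take features away. */
  static void apply_osc(struct host_bridge *b, bool hotplug, bool pme,
                        bool aer)
  {
          b->native_pcie_hotplug &= hotplug;
          b->native_pme &= pme;
          b->native_aer &= aer;
  }

  int main(void)
  {
          struct host_bridge guest0000, vmd10000;

          /* Guest domain 0000: the PNP0A08 _OSC grants only
           * PCIeCapability, so hotplug/PME/AER stay with the platform. */
          init_host_bridge(&guest0000, 0x0000);
          apply_osc(&guest0000, false, false, false);

          /* Domain 10000 below the VMD bridge: if no _OSC applies
           * there, the defaults stand and the guest OS owns hotplug. */
          init_host_bridge(&vmd10000, 0x10000);

          printf("domain 0000  native_pcie_hotplug=%d\n",
                 guest0000.native_pcie_hotplug);
          printf("domain 10000 native_pcie_hotplug=%d\n",
                 vmd10000.native_pcie_hotplug);
          return 0;
  }

Run as-is, this prints native_pcie_hotplug=0 for domain 0000 and =1
for domain 10000, which is the asymmetry being discussed above.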