On Mon, Jul 29, 2024 at 01:08:59PM -0700, Nirmal Patel wrote:
> On Thu, 25 Jul 2024 09:40:13 +0530
> Manivannan Sadhasivam <manivannan.sadhasivam@xxxxxxxxxx> wrote:
> 
> > On Wed, Jul 24, 2024 at 02:10:30PM -0500, Bjorn Helgaas wrote:
> > > On Wed, Jul 24, 2024 at 10:00:40AM -0700, Nirmal Patel wrote:
> > > > VMD does not support legacy interrupts for devices downstream
> > > > from a VMD endpoint. So initialize the PCI_INTERRUPT_LINE to 0
> > > > for these devices to ensure we don't try to set up a legacy irq
> > > > for them.
> > > 
> > > s/legacy interrupts/INTx/
> > > s/legacy irq/INTx/
> > > 
> > > > Note: This patch was proposed by Jim; I am trying to upstream it.
> > > > 
> > > > Signed-off-by: Jim Harris <james.r.harris@xxxxxxxxx>
> > > > Signed-off-by: Nirmal Patel <nirmal.patel@xxxxxxxxxxxxxxx>
> > > > ---
> > > >  arch/x86/pci/fixup.c | 14 ++++++++++++++
> > > >  1 file changed, 14 insertions(+)
> > > > 
> > > > diff --git a/arch/x86/pci/fixup.c b/arch/x86/pci/fixup.c
> > > > index b33afb240601..a3b34a256e7f 100644
> > > > --- a/arch/x86/pci/fixup.c
> > > > +++ b/arch/x86/pci/fixup.c
> > > > @@ -653,6 +653,20 @@ static void quirk_no_aersid(struct pci_dev *pdev)
> > > >  DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_INTEL, PCI_ANY_ID,
> > > >  			      PCI_CLASS_BRIDGE_PCI, 8, quirk_no_aersid);
> > > > 
> > > > +#if IS_ENABLED(CONFIG_VMD)
> > > > +/*
> > > > + * VMD does not support legacy interrupts for downstream devices.
> > > > + * So PCI_INTERRPUT_LINE needs to be initialized to 0 to ensure OS
> > > > + * doesn't try to configure a legacy irq.
> > > 
> > > s/legacy interrupts/INTx/
> > > s/PCI_INTERRPUT_LINE/PCI_INTERRUPT_LINE/
> > > 
> > > > + */
> > > > +static void quirk_vmd_interrupt_line(struct pci_dev *dev)
> > > > +{
> > > > +	if (is_vmd(dev->bus))
> > > > +		pci_write_config_byte(dev, PCI_INTERRUPT_LINE, 0);
> > > > +}
> > > > +DECLARE_PCI_FIXUP_HEADER(PCI_ANY_ID, PCI_ANY_ID, quirk_vmd_interrupt_line);
> > > > +#endif
> > > 
> > > A quirk for every PCI device, even on systems without VMD, seems
> > > like kind of a clumsy way to deal with this.
> > > 
> > > Conceptually, I would expect a host bridge driver (VMD acts like a
> > > host bridge in this case) to know whether it supports INTx, and if
> > > the driver knows it doesn't support INTx or it has no _PRT or DT
> > > description of INTx routing to use, an attempt to configure INTx
> > > should just fail naturally.
> > > 
> > > I don't claim this is how host bridge drivers actually work; I just
> > > think it's the way they *should* work.
> > 
> > Absolutely! This patch is fixing the issue in the wrong place. There
> > are existing DT-based host bridge drivers that disable INTx due to
> > lack of hardware capability. The driver just needs to nullify the
> > pci_host_bridge::map_irq callback.
> > 
> > - Mani
> 
> For VMD as a host bridge, pci_host_bridge::map_irq is NULL. Even all
> VMD root ports' PCI_INTERRUPT_LINE registers are set to 0.

If map_irq is already NULL, then how is INTx being configured? In your
patch description:

"So initialize the PCI_INTERRUPT_LINE to 0 for these devices to ensure
we don't try to set up a legacy irq for them."

Who is 'we'? For sure, the PCI core wouldn't set up INTx in your case.
Does 'we' refer to device firmware?

> Since VMD doesn't explicitly set the PCI_INTERRUPT_LINE register to 0
> for all of its sub-devices (e.g. NVMe), if some NVMes have a non-zero
> value set for PCI_INTERRUPT_LINE (e.g. 0xff), then software like SPDK
> can read it and make wrong assumptions about INTx support.
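To make the quoted concern concrete, here is a minimal user-space sketch
(hypothetical, not SPDK's actual code; the device path is a placeholder)
that reads the Interrupt Line register out of sysfs config space -- a
stale non-zero value read this way is what could be mistaken for INTx
support:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	/* Placeholder BDF; a real tool would take the device as input. */
	int fd = open("/sys/bus/pci/devices/0000:01:00.0/config", O_RDONLY);
	unsigned char line = 0xff;

	if (fd < 0)
		return 1;
	pread(fd, &line, 1, 0x3c);	/* 0x3c == PCI_INTERRUPT_LINE */
	close(fd);

	printf("Interrupt Line: %#04x\n", line);
	return 0;
}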
If this statement is true (I haven't heard of it before), then don't we
need to set PCI_INTERRUPT_LINE to 0 for all devices, irrespective of the
host bridge?

> Based on Bjorn's and your suggestion, it might be better if VMD sets
> the PCI_INTERRUPT_LINE register for all of its sub-devices during VMD
> enumeration.

What about hotplug devices?

- Mani

-- 
மணிவண்ணன் சதாசிவம்
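As a concrete reading of the enumeration-time suggestion above, a minimal
sketch (not the posted patch; the callback name, the call site, and the
vmd->bus pointer are assumptions modeled on drivers/pci/controller/vmd.c):

static int vmd_clear_interrupt_line(struct pci_dev *pdev, void *unused)
{
	/* VMD never routes INTx for its sub-devices, so clear any stale
	 * Interrupt Line value left behind by firmware. */
	pci_write_config_byte(pdev, PCI_INTERRUPT_LINE, 0);
	return 0;
}

/* Run once the VMD root bus has been scanned, e.g. from vmd_enable_domain(): */
pci_walk_bus(vmd->bus, vmd_clear_interrupt_line, NULL);

Note that such a walk only covers devices present at scan time; a device
hot-added later would still expose whatever its Interrupt Line register
happens to contain, which is the gap the hotplug question points at.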