On Thu, Sep 12, 2024 at 08:31:00AM -0700, Nirmal Patel wrote:
> On Thu, 12 Sep 2024 20:06:57 +0530
> Manivannan Sadhasivam <manivannan.sadhasivam@xxxxxxxxxx> wrote:
> 
> > On Thu, Aug 22, 2024 at 11:30:10AM -0700, Nirmal Patel wrote:
> > > On Thu, 22 Aug 2024 15:18:06 +0530
> > > Manivannan Sadhasivam <manivannan.sadhasivam@xxxxxxxxxx> wrote:
> > > 
> > > > On Tue, Aug 20, 2024 at 03:32:13PM -0700, Nirmal Patel wrote:
> > > > > VMD does not support INTx for devices downstream from a VMD
> > > > > endpoint. So initialize the PCI_INTERRUPT_LINE to 0 for all NVMe
> > > > > devices under VMD to ensure other applications don't try to set
> > > > > up an INTx for them.
> > > > > 
> > > > > Signed-off-by: Nirmal Patel <nirmal.patel@xxxxxxxxxxxxxxx>
> > > > 
> > > > I shared a diff to put it in pci_assign_irq() and you said that
> > > > you were going to test it [1]. I don't see a reply to that, and
> > > > now you came up with another approach.
> > > > 
> > > > What happened in between?
> > > 
> > > Apologies, I did perform the tests and the patch worked fine.
> > > However, I saw that a lot of bridge devices had the register set
> > > to 0xFF and I didn't want to alter them.
> > 
> > You should've either replied to my comment or mentioned it in the
> > changelog.
> > 
> > > Also, pci_assign_irq would still set the interrupt line register
> > > to 0 with or without VMD. Since I didn't want to introduce issues
> > > for non-VMD setups, I decided to keep the change limited to VMD
> > > only.
> > 
> > Sorry, no. The SPDK usecase is not specific to VMD, and neither is
> > the issue. So this should be fixed in the PCI core as I proposed.
> > What if another bridge also wants to do the same?
> 
> Okay. Should I clear it for every device that doesn't have map_irq
> set up, like you mentioned in your suggested patch, or keep it to
> NVMe or devices with a storage class code?

For all the devices.

- Mani

> -nirmal
> 
> > - Mani
> > 
> > > -Nirmal
> > > 
> > > > - Mani
> > > > 
> > > > [1] https://lore.kernel.org/linux-pci/20240801115756.0000272e@xxxxxxxxxxxxxxx
> > > > 
> > > > > ---
> > > > > v2->v1: Change the execution from fixup.c to vmd.c
> > > > > ---
> > > > >  drivers/pci/controller/vmd.c | 13 +++++++++++++
> > > > >  1 file changed, 13 insertions(+)
> > > > > 
> > > > > diff --git a/drivers/pci/controller/vmd.c b/drivers/pci/controller/vmd.c
> > > > > index a726de0af011..2e9b99969b81 100644
> > > > > --- a/drivers/pci/controller/vmd.c
> > > > > +++ b/drivers/pci/controller/vmd.c
> > > > > @@ -778,6 +778,18 @@ static int vmd_pm_enable_quirk(struct pci_dev *pdev, void *userdata)
> > > > >  	return 0;
> > > > >  }
> > > > >  
> > > > > +/*
> > > > > + * Some applications like SPDK read PCI_INTERRUPT_LINE to decide
> > > > > + * whether INTx is enabled or not. Since VMD doesn't support INTx,
> > > > > + * write 0 to all NVMe devices under VMD.
> > > > > + */
> > > > > +static int vmd_clr_int_line_reg(struct pci_dev *dev, void *userdata)
> > > > > +{
> > > > > +	if (dev->class == PCI_CLASS_STORAGE_EXPRESS)
> > > > > +		pci_write_config_byte(dev, PCI_INTERRUPT_LINE, 0);
> > > > > +	return 0;
> > > > > +}
> > > > > +
> > > > >  static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
> > > > >  {
> > > > >  	struct pci_sysdata *sd = &vmd->sysdata;
> > > > > @@ -932,6 +944,7 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
> > > > >  
> > > > >  	pci_scan_child_bus(vmd->bus);
> > > > >  	vmd_domain_reset(vmd);
> > > > > +	pci_walk_bus(vmd->bus, vmd_clr_int_line_reg, &features);
> > > > >  
> > > > >  	/* When Intel VMD is enabled, the OS does not discover the Root Ports
> > > > >  	 * owned by Intel VMD within the MMCFG space. pci_reset_bus() applies
> > > > > -- 
> > > > > 2.39.1

-- 
மணிவண்ணன் சதாசிவம்
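
For reference, the pci_assign_irq() approach discussed above would amount
to something like the sketch below. This is only an illustration, not the
actual diff from [1]: it assumes the existing early-return path in
drivers/pci/setup-irq.c, where pci_assign_irq() bails out when the host
bridge registers no map_irq callback, and it clears PCI_INTERRUPT_LINE at
that point for every device behind such a bridge.

	/* drivers/pci/setup-irq.c (illustrative sketch only) */
	#include <linux/kernel.h>
	#include <linux/pci.h>

	void pci_assign_irq(struct pci_dev *dev)
	{
		struct pci_host_bridge *hbrg = pci_find_host_bridge(dev->bus);

		if (!hbrg->map_irq) {
			pci_dbg(dev, "runtime IRQ mapping not provided by arch\n");
			/*
			 * No legacy INTx routing behind this bridge (VMD being
			 * one example). Clear PCI_INTERRUPT_LINE so userspace
			 * such as SPDK does not mistake a stale firmware value
			 * for a valid INTx assignment.
			 */
			pci_write_config_byte(dev, PCI_INTERRUPT_LINE, 0);
			return;
		}

		/* ... existing pin lookup, swizzling, and IRQ assignment ... */
	}

Unlike the vmd.c bus walk in the patch above, a change along these lines
covers every device whose bridge cannot route INTx rather than only NVMe
class devices under VMD, which is what the "for all the devices" answer
asks for.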