Re: [PATCH 2/3] pci: designware: add separate driver for the MSI part of the RC

On Sat, 15 Feb 2020 09:35:56 +0000,
Ard Biesheuvel <ardb@xxxxxxxxxx> wrote:
> 
> (updated some email addresses in cc, including my own)
> 
> On Sat, 15 Feb 2020 at 01:54, Alan Mikhak <alan.mikhak@xxxxxxxxxx> wrote:
> >
> > Hi,
> >
> > What is the right approach for adding MSI support for the generic
> > Linux PCI host driver?
> >
> > I came across this patch which seems to address a similar
> > situation. It seems to have been dropped in v3 of the patchset
> > with the explanation "drop MSI patch [for now], since it
> > turns out we may not need it".
> >
> > [PATCH 2/3] pci: designware: add separate driver for the MSI part of the RC
> > https://lore.kernel.org/linux-pci/20170821192907.8695-3-ard.biesheuvel@xxxxxxxxxx/
> >
> > [PATCH v2 2/3] pci: designware: add separate driver for the MSI part of the RC
> > https://lore.kernel.org/linux-pci/20170824184321.19432-3-ard.biesheuvel@xxxxxxxxxx/
> >
> > [PATCH v3 0/2] pci: add support for firmware initialized designware RCs
> > https://lore.kernel.org/linux-pci/20170828180437.2646-1-ard.biesheuvel@xxxxxxxxxx/
> >
> 
> For the platform in question, it turned out that we could use the MSI
> block of the core's GIC interrupt controller directly, which is a much
> better solution.
> 
> In general, turning MSIs into wired interrupts is not a great idea,
> since the whole point of MSIs is that they are sufficiently similar to
> other DMA transactions to ensure that the interrupt won't arrive
> before the related memory transactions have completed.
>
> If your interrupt controller does not have this capability, then yes,
> you are stuck with this little widget that decodes an inbound write to
> a magic address and turns it into a wired interrupt.

I can only second this. It is much better to have a generic block
implementing MSI *in a non-multiplexed way*, for multiple reasons:

- the interrupt vs DMA race that Ard mentions above,

- MSIs are very often used to describe the state of per-CPU queues. If
  you multiplex MSIs behind a single multiplexing interrupt, it is
  always the same CPU that gets interrupted, and you don't benefit
  from having multiple queues at all.

Even if you have to implement the support as a bunch of wired
interrupts, there is still a lot of value in keeping a 1:1 mapping
between MSIs and wires.

Thanks,

	M.

-- 
Jazz is not dead, it just smells funny.
