Re: [PATCH V5 3/3] PCI: xilinx-xdma: Add Xilinx XDMA Root Port driver

On Mon, Jul 24, 2023 at 06:40:58AM +0000, Havalige, Thippeswamy wrote:
> > From: Bjorn Helgaas <helgaas@xxxxxxxxxx>
> > On Thu, Jul 20, 2023 at 06:37:03AM +0000, Havalige, Thippeswamy wrote:
> > > > From: Bjorn Helgaas <helgaas@xxxxxxxxxx> ...
> > > > On Wed, Jun 28, 2023 at 02:58:12PM +0530, Thippeswamy Havalige wrote:
> > > > > Add support for Xilinx XDMA Soft IP core as Root Port.
> > > > > ...

> > If you have more detail about the "error interrupt," that would be
> > useful as well.  Does this refer to an AER interrupt, a "System
> > Error", something else?  I'm looking at the diagram in PCIe r6.0,
> > Figure 6-3, wondering if this is related to anything there.  I
> > suppose it's likely some Xilinx-specific thing?
> 
> - Agreed, I'll modify Legacy to INTx. Regarding the error interrupts:
> these are Xilinx controller-specific interrupts used to notify the
> user about errors such as config timeouts, slave unsupported
> requests, and fatal and non-fatal errors.

This would be great material for comments and/or a revised commit log.
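E.g., even a short comment near the error-interrupt handling, just
sketching what you described above (wording below is only my sketch,
not from the driver), would help future readers:

/*
 * The "error" interrupts are Xilinx controller-specific (not AER or
 * PCIe System Errors): they report conditions such as config
 * timeouts, slave unsupported requests, and fatal/non-fatal errors.
 */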

> > > > > +	/* Plug the INTx chained handler */
> > > > > +	irq_set_chained_handler_and_data(port->intx_irq,
> > > > > +					 xilinx_pl_dma_pcie_intx_flow, port);
> > > > > +
> > > > > +	/* Plug the main event chained handler */
> > > > > +	irq_set_chained_handler_and_data(port->irq,
> > > > > +					 xilinx_pl_dma_pcie_event_flow, port);
> > > >
> > > > What's the reason for using chained IRQs?  Can this be done without
> > > > them?  I don't claim to understand all the issues here, but it seems
> > > > better to avoid chained IRQ handlers when possible:
> > > > https://lore.kernel.org/all/877csohcll.ffs@tglx/
> > 
> > > - As per the comments in this thread:
> > > https://lkml.kernel.org/lkml/alpine.DEB.2.20.1705232307330.2409@nanos/T/
> > > "It is fine to have chained interrupts when bootloader, device tree,
> > > and kernel are under control. Only if BIOS/UEFI comes into play is
> > > the user helpless against an interrupt storm which will cause the
> > > system to hang."
> > >
> > > We are using an ARM embedded platform with the bootloader +
> > > devicetree flow.
> > 
> > I read Thomas' comments as "in general it's better to use regular
> > interrupts, but we can live with chained interrupts if we have
> > control of bootloader, device tree, and kernel."
> > 
> > I guess my questions are more like:
> > 
> >   - Could this be done with either chained interrupts or regular
> >     interrupts?
> >   - If so, what is the advantage to using chained interrupts?

> With regular interrupts, the interrupt is self-consumed: it is
> handled entirely within the driver that requested it. Chained
> interrupts are not self-consumed: the chained handler does not
> service the interrupt itself, but demultiplexes it and forwards it
> to the subsystem for which the interrupt was actually raised, by
> calling generic_handle_irq().
> 
> For example, MSI interrupts are consumed by endpoints and endpoint
> drivers, so the chained handler forwards each interrupt to the
> specific EP driver (for example, the NVMe subsystem).

This doesn't really explain it for me, probably because of my IRQ
ignorance.
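For concreteness, here is my mental model of the two flavors as a
minimal sketch -- the foo_* names and the FOO_ISR register are made
up, not taken from either driver:

#include <linux/bitops.h>
#include <linux/interrupt.h>
#include <linux/io.h>
#include <linux/irq.h>
#include <linux/irqchip/chained_irq.h>
#include <linux/irqdomain.h>

struct foo_pcie {
	void __iomem *base;		/* controller registers */
	struct irq_domain *irq_domain;	/* domain for the demuxed IRQs */
};

#define FOO_ISR 0x0			/* hypothetical status register */

/*
 * Regular handler: registered with devm_request_irq(), runs as a
 * normal interrupt with its own irqaction and shows up in
 * /proc/interrupts.
 */
static irqreturn_t foo_pcie_irq_handler(int irq, void *arg)
{
	struct foo_pcie *pcie = arg;
	unsigned long status = readl(pcie->base + FOO_ISR);
	unsigned long bit;

	for_each_set_bit(bit, &status, 32)
		generic_handle_domain_irq(pcie->irq_domain, bit);

	return IRQ_HANDLED;
}

/*
 * Chained handler: installed with irq_set_chained_handler_and_data(),
 * called directly from the parent interrupt's flow handler, so the
 * demux must be bracketed with chained_irq_enter()/exit().
 */
static void foo_pcie_event_flow(struct irq_desc *desc)
{
	struct foo_pcie *pcie = irq_desc_get_handler_data(desc);
	struct irq_chip *chip = irq_desc_get_chip(desc);
	unsigned long status, bit;

	chained_irq_enter(chip, desc);
	status = readl(pcie->base + FOO_ISR);
	for_each_set_bit(bit, &status, 32)
		generic_handle_domain_irq(pcie->irq_domain, bit);
	chained_irq_exit(chip, desc);
}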

I compared xilinx_pl_dma (which uses chained interrupts) with
pci-aardvark.c (which does not).

  - xilinx_pl_dma_pcie_setup_irq() calls platform_get_irq(0) once and
    sets up xilinx_pl_dma_pcie_event_flow() as the handler.

  - advk_pcie_probe() calls platform_get_irq(0) once and sets up
    advk_pcie_irq_handler() as the handler.

  - xilinx_pl_dma_pcie_event_flow() reads XILINX_PCIE_DMA_REG_IDR to
    learn which interrupts are pending and calls
    generic_handle_domain_irq() for each.

  - advk_pcie_irq_handler() calls advk_pcie_handle_int(), which reads
    PCIE_ISR0_REG and PCIE_ISR1_REG to learn which interrupts are
    pending and calls generic_handle_domain_irq() for each.

It seems like both drivers do essentially the same thing, but
xilinx_pl_dma_pcie_event_flow() is a chained handler and
advk_pcie_irq_handler() is not.
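Mechanically, the difference looks like nothing more than how the
demux handler is registered. Continuing the hypothetical foo_* sketch
above (in a real driver you would pick one of the two, not both):

static int foo_pcie_setup_irq(struct device *dev, struct foo_pcie *pcie,
			      int irq)
{
	/* Chained, as xilinx_pl_dma does: */
	irq_set_chained_handler_and_data(irq, foo_pcie_event_flow, pcie);

	/* ...or a regular shared handler, as aardvark does: */
	return devm_request_irq(dev, irq, foo_pcie_irq_handler,
				IRQF_SHARED, "foo-pcie", pcie);
}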

Is there some underlying difference in the way the hardware works that
means xilinx_pl_dma needs a chained handler while aardvark does not?

Bjorn


