On 9/27/22 09:46, Martin Tůma wrote:
On 27. 09. 22 18:28, Lizhi Hou wrote:
Okay, I got the point. How about changing the request/remove APIs to
enable/disable APIs as below:

xdma_enable_user_irq(struct platform_device *pdev, u32 user_irq_index, u32 *irq)
        user_irq_index: user logic interrupt wire index. (The XDMA driver
        determines how system IRQs are mapped to DMA channels and user
        logic wires.)
        irq: IRQ number returned for registering an interrupt handler
        (request_irq()) or for passing to an existing platform driver.

xdma_disable_user_irq(struct platform_device *pdev, u32 user_irq_index)

Does this make sense to you?
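
For illustration, a parent PCIe driver could consume the proposed calls
roughly as in the minimal sketch below (assuming they return 0 on
success); everything except the two xdma_* prototypes above is a made-up
name, and the prototypes themselves are only the proposal, not an
existing interface.

#include <linux/interrupt.h>
#include <linux/platform_device.h>

static irqreturn_t my_user_logic_isr(int irq, void *dev_id)
{
	/* handle the user logic event here */
	return IRQ_HANDLED;
}

static int my_attach_user_irq(struct platform_device *xdma_pdev)
{
	u32 irq;
	int ret;

	/* wire index 0; the XDMA driver resolves it to a system IRQ */
	ret = xdma_enable_user_irq(xdma_pdev, 0, &irq);
	if (ret)
		return ret;

	ret = request_irq(irq, my_user_logic_isr, 0, "my-user-logic", NULL);
	if (ret)
		xdma_disable_user_irq(xdma_pdev, 0);

	return ret;
}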
I think even the "irq" parameter in the enable function is superfluous,
as the parent driver (the driver of the actual PCIe card) knows* which
PCI IRQ it has to allocate without XDMA providing the number.
xdma_enable_user_irq(struct platform_device *pdev, u32 user_irq_index);
xdma_disable_user_irq(struct platform_device *pdev, u32 user_irq_index);
should be all that is needed.
M.
* something like:
pci_irq_vector((pdev), PCI_BAR_ID) + NUM_C2H_CHANNELS + NUM_H2C_CHANNELS
can be used from the PCIe driver
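
A rough sketch of that variant, reusing the hypothetical
my_user_logic_isr() from the sketch above and folding the channel count
into the vector index; it hard-codes the assumption that the DMA channel
vectors come first, which is exactly what the reply below questions.

#include <linux/pci.h>

static int my_attach_user_irq(struct pci_dev *pci_dev,
			      struct platform_device *xdma_pdev,
			      u32 user_irq_index)
{
	/* assume the DMA channel vectors come first, then the user IRQs */
	int irq = pci_irq_vector(pci_dev, NUM_H2C_CHANNELS +
				 NUM_C2H_CHANNELS + user_irq_index);
	int ret;

	if (irq < 0)
		return irq;

	ret = request_irq(irq, my_user_logic_isr, 0, "my-user-logic", NULL);
	if (ret)
		return ret;

	xdma_enable_user_irq(xdma_pdev, user_irq_index);
	return 0;
}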
How does the parent driver know that the first few vectors will be
assigned to DMA channels? The parent driver should not assume that the
first (NUM_C2H_CHANNELS + NUM_H2C_CHANNELS) vectors are for the DMA
channels.
The parent driver passes the system IRQ range to the XDMA driver, and
only the XDMA driver knows which IRQs are used by the DMA channels and
which IRQs are mapped to user logic wires. I would keep the "u32 *irq"
argument.
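
To make that split of responsibility concrete, the parent can simply
forward every allocated vector to the XDMA platform device and never
compute the offsets itself. The sketch below is only an illustration of
that idea; the "xdma" device name, my_register_xdma() and the use of
platform_device_register_resndata() are assumptions, not the actual glue
code.

#include <linux/err.h>
#include <linux/pci.h>
#include <linux/platform_device.h>
#include <linux/slab.h>

static struct platform_device *my_register_xdma(struct pci_dev *pci_dev,
						int nvec)
{
	struct platform_device *xdma_pdev;
	struct resource *res;
	int i;

	res = kcalloc(nvec, sizeof(*res), GFP_KERNEL);
	if (!res)
		return ERR_PTR(-ENOMEM);

	/*
	 * Hand every allocated vector down; only the XDMA driver decides
	 * which of them drive DMA channels and which are routed to user
	 * logic wires.
	 */
	for (i = 0; i < nvec; i++) {
		int irq = pci_irq_vector(pci_dev, i);

		if (irq < 0) {
			kfree(res);
			return ERR_PTR(irq);
		}
		res[i].start = irq;
		res[i].end = irq;
		res[i].flags = IORESOURCE_IRQ;
	}

	xdma_pdev = platform_device_register_resndata(&pci_dev->dev, "xdma",
						      PLATFORM_DEVID_AUTO,
						      res, nvec, NULL, 0);
	kfree(res);
	return xdma_pdev;
}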
Thanks,
Lizhi