Re: xhci_hcd HC died; cleaning up with TUSB7340 and µPD720201

On Monday 20 November 2017 07:01 PM, Roger Quadros wrote:
> On 20/11/17 15:19, Vignesh R wrote:
>>
>>
>> On Monday 20 November 2017 01:31 PM, Roger Quadros wrote:
>> [...]
>>>>
>>>> So, could you try reverting commit 8c934095fa2f3 and 
>>>> also apply below patch and let me know if that fixes the issue?
>>>>
>>>> -----------
>>>>
>>>> diff --git a/drivers/pci/dwc/pci-dra7xx.c b/drivers/pci/dwc/pci-dra7xx.c
>>>> index e77a4ceed74c..8280abc56f30 100644
>>>> --- a/drivers/pci/dwc/pci-dra7xx.c
>>>> +++ b/drivers/pci/dwc/pci-dra7xx.c
>>>> @@ -259,10 +259,17 @@ static irqreturn_t dra7xx_pcie_msi_irq_handler(int irq, void *arg)
>>>>         u32 reg;
>>>>  
>>>>         reg = dra7xx_pcie_readl(dra7xx, PCIECTRL_DRA7XX_CONF_IRQSTATUS_MSI);
>>>> +       dra7xx_pcie_writel(dra7xx, PCIECTRL_DRA7XX_CONF_IRQSTATUS_MSI, reg);
>>>>  
>>>>         switch (reg) {
>>>>         case MSI:
>>>> -               dw_handle_msi_irq(pp);
>>>> +               /*
>>>> +                * Need to make sure no MSI IRQs are pending before
>>>> +                * exiting handler, else the wrapper will not catch new
>>>> +                * IRQs. So loop around till dw_handle_msi_irq() returns
>>>> +                * IRQ_NONE
>>>> +                */
>>>> +               while (dw_handle_msi_irq(pp) != IRQ_NONE);
>>>
>>> To avoid this kind of looping, shouldn't we be disabling all IRQ events while
>>> the interrupt handler is running and enable them just before we return from the hardirq
>>> handler?
>>
>> IIUC, you are saying to disable all MSIs at PCIe designware core level,
>> then call dw_handle_msi_irq() and then enable MSIs after hardirq
>> returns. But, the problem is if PCIe EP raises another MSI after the
>> call to EP's handler but before re-enabling MSIs, then it will be
>> ignored as IRQs are not yet enabled.
>> Ideally, EPs support Per-Vector Masking (PVM), which allows the RC to
>> prevent the EP from sending MSI messages for some time. But, unfortunately,
>> the cards mentioned here don't support this feature.
> 
> I'm not that familiar with MSI specifics.
> 
> But for any typical hardware, there should be an interrupt event enable register and an
> interrupt mask register.
> 
> In the IRQ handler, we mask the interrupt but still keep the interrupt events enabled so that
> they can be latched during the time the interrupt was masked.
> 
> When the interrupt is unmasked at end of the IRQ handler, it should re-trigger the interrupt
> if any events were latched and pending.
> 
> This way you don't need to keep checking for any pending events in the IRQ handler.
> 

Thanks for the suggestion! I tried using interrupt masking at the
DesignWare level, but unfortunately that does not help and my test cases
still fail. It seems the DesignWare MSI status register reports
non-masked status, and the dra7xx-specific wrapper appears to rely on
this non-masked status (rather than the actual DesignWare IRQ signal) to
raise the IRQ to the CPU. There is very little documentation in the TRM
on how the wrapper forwards the DesignWare IRQ status to the CPU.

So, at best, I will add a check in the above while() loop and break out
of the IRQ handler after, say, 1K iterations.


-- 
Regards
Vignesh


