Hi,

On Wed, Mar 17, 2021 at 9:31 AM Dan Williams <dan.j.williams@xxxxxxxxx> wrote:
>
> On Tue, Mar 16, 2021 at 10:31 PM Lukas Wunner <lukas@xxxxxxxxx> wrote:
> >
> > On Tue, Mar 16, 2021 at 10:08:31PM -0700, Dan Williams wrote:
> > > On Tue, Mar 16, 2021 at 9:14 PM Lukas Wunner <lukas@xxxxxxxxx> wrote:
> > > >
> > > > On Fri, Mar 12, 2021 at 07:32:08PM -0800, sathyanarayanan.kuppuswamy@xxxxxxxxxxxxxxx wrote:
> > > > > +	if ((events == PCI_EXP_SLTSTA_DLLSC) && is_dpc_reset_active(pdev)) {
> > > > > +		ctrl_info(ctrl, "Slot(%s): DLLSC event(DPC), skipped\n",
> > > > > +			  slot_name(ctrl));
> > > > > +		ret = IRQ_HANDLED;
> > > > > +		goto out;
> > > > > +	}
> > > >
> > > > Two problems here:
> > > >
> > > > (1) If recovery fails, the link will *remain* down, so there'll be
> > > >     no Link Up event.  You've filtered the Link Down event, thus the
> > > >     slot will remain in ON_STATE even though the device in the slot is
> > > >     no longer accessible.  That's not good, the slot should be brought
> > > >     down in this case.
> > >
> > > Can you elaborate on why that is "not good" from the end user
> > > perspective? From a driver perspective the device driver context is
> > > lost and the card needs servicing. The service event starts a new
> > > cycle of slot-attention being triggered and that syncs the slot-down
> > > state at that time.
> >
> > All of pciehp's code assumes that if the link is down, the slot must be
> > off.  A slot which is in ON_STATE for a prolonged period of time even
> > though the link is down is an oddity the code doesn't account for.
> >
> > If the link goes down, the slot should be brought into OFF_STATE.
> > (It's okay though to delay bringdown until DPC recovery has completed
> > unsuccessfully, which is what the patch I'm proposing does.)
> >
> > I don't understand what you mean by "service event".  Someone unplugging
> > and replugging the NVMe drive?
>
> Yes, service meaning a technician physically removes the card.
>
> > > > (2) If recovery succeeds, there's a race where pciehp may call
> > > >     is_dpc_reset_active() *after* dpc_reset_link() has finished.
> > > >     So both the DPC Trigger Status bit as well as pdev->dpc_reset_active
> > > >     will be cleared.  Thus, the Link Up event is not filtered by pciehp
> > > >     and the slot is brought down and back up even though DPC recovery
> > > >     was successful, which seems undesirable.
> > >
> > > The hotplug driver never saw the Link Down, so what does it do when
> > > the slot transitions from Link Up to Link Up? Do you mean the Link
> > > Down might fire after the dpc recovery has completed if the hotplug
> > > notification was delayed?
> >
> > If the Link Down is filtered and the Link Up is not, pciehp will
> > bring down the slot and then bring it back up.  That's because pciehp
> > can't really tell whether a DLLSC event is Link Up or Link Down.
> >
> > It just knows that the link was previously up, is now up again,
> > but must have been down intermittently, so transactions to the
> > device in the slot may have been lost and the slot is therefore
> > brought down for safety.  Because the link is up, it is then
> > brought back up.
>
> I wonder why we're not seeing that effect in testing?

In our test case, there is a good chance that the LINK UP event is also
filtered. We clear dpc_reset_active only after we have verified that the
link is up. So if the hotplug handler processes the LINK UP event before
we clear dpc_reset_active, the issue Lukas mentions does not arise.
The relevant hunk from dpc_reset_link():
 	if (!pcie_wait_for_link(pdev, true)) {
 		pci_info(pdev, "Data Link Layer Link Active not set in 1000 msec\n");
-		return PCI_ERS_RESULT_DISCONNECT;
+		status = PCI_ERS_RESULT_DISCONNECT;
 	}
-	return PCI_ERS_RESULT_RECOVERED;
+	atomic_dec_return_release(&pdev->dpc_reset_active);

--
Sathyanarayanan Kuppuswamy
Linux Kernel Developer
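To make the ordering above concrete, here is a minimal user-space sketch
(C11 atomics and pthreads rather than the kernel's atomic_t API) of the
handshake being discussed: the recovery thread raises dpc_reset_active
before the reset and drops it with release semantics only after the link
is verified up, while the hotplug thread tests it with acquire semantics.
The variable name mirrors the patch; the thread harness, timings, and
function names are hypothetical illustration, not kernel code.

/*
 * Minimal user-space model of the dpc_reset_active handshake.
 * Build: cc -std=c11 -pthread dpc_model.c -o dpc_model
 * Illustrates the memory-ordering pattern only; this is not the
 * kernel implementation.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <unistd.h>

static atomic_int dpc_reset_active;	/* models pdev->dpc_reset_active */

/* Recovery side, modeled on dpc_reset_link() in the patch. */
static void *dpc_recovery(void *arg)
{
	(void)arg;
	/* Publish "recovery in progress" before touching the link. */
	atomic_fetch_add_explicit(&dpc_reset_active, 1, memory_order_acquire);

	usleep(1000);	/* stand-in for the actual link reset and retrain */

	/*
	 * Drop the flag only after the link has been verified up.  A
	 * DLLSC handler that runs after this store no longer sees the
	 * flag -- that is the residual window Lukas describes.
	 */
	atomic_fetch_sub_explicit(&dpc_reset_active, 1, memory_order_release);
	return NULL;
}

/* Hotplug side, modeled on the is_dpc_reset_active() check. */
static void *hotplug_isr(void *arg)
{
	(void)arg;
	if (atomic_load_explicit(&dpc_reset_active, memory_order_acquire))
		printf("DLLSC event(DPC), skipped\n");
	else
		printf("DLLSC event handled normally\n");
	return NULL;
}

int main(void)
{
	pthread_t rec, hp;

	pthread_create(&rec, NULL, dpc_recovery, NULL);
	usleep(100);	/* let recovery publish the flag first */
	pthread_create(&hp, NULL, hotplug_isr, NULL);

	pthread_join(rec, NULL);
	pthread_join(hp, NULL);
	return 0;
}

If the hotplug thread runs while the flag is still raised, the event is
skipped, which matches the test observation above; only a handler that
fires after the release store falls into the window Lukas points out.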