On Fri, May 15, 2020 at 12:55:35PM +0300, Mika Westerberg wrote:
> On Thu, May 14, 2020 at 05:41:32PM -0500, Bjorn Helgaas wrote:
> > On Thu, May 14, 2020 at 04:30:43PM +0300, Mika Westerberg wrote:
> > > Kai-Heng Feng reported that it takes a long time (>1s) to resume
> > > Thunderbolt-connected PCIe devices from both runtime suspend and
> > > system sleep (s2idle).
> > >
> > > On these PCIe downstream ports, the Link Capabilities 2 register
> > > (PCI_EXP_LNKCAP2) announces support for speeds > 5 GT/s, but the
> > > link is then capped to 2.5 GT/s by the Link Control 2 register
> > > (PCI_EXP_LNKCTL2). This possibility was not considered in
> > > pci_bridge_wait_for_secondary_bus(), so it ended up waiting for
> > > 1100 ms, as these ports do not support active link layer reporting
> > > either.
> >
> > I don't think PCI_EXP_LNKCTL2 is relevant here.  I think the lack of
> > Data Link Layer Link Active is just a chip erratum.  Is that
> > documented anywhere?
>
> I think it is relevant because if you hard-code (hardware) LNKCTL2 to
> always target 2.5 GT/s, then the port effectively does not need to
> implement Data Link Layer Link Active because the link speed never
> goes higher than that.

I don't think it's reasonable to expect software to check Link
Capabilities 2, then try to write Link Control 2 and figure out whether
the target speed is hard-wired.  I think these devices are just broken
(at least per spec).

> > > @@ -4701,7 +4702,7 @@ static bool pcie_wait_for_link_delay(struct pci_dev *pdev, bool active,
> > >  		msleep(10);
> > >  		timeout -= 10;
> > >  	}
> >
> > I think maybe the code above (not included in the context here) should
> > say:
> >
> >   if (!pdev->link_active_reporting) {
> >     msleep(timeout + delay);
> >     return true;
> >   }
> >
> > to match the rest of 4827d63891b6 ("PCI/PM: Add
> > pcie_wait_for_link_delay()").  What do you think?  If you agree, I'd
> > make that a separate patch because it's not related to the current
> > fix.
>
> Yes, I agree.

OK, I added a patch to do this and applied both to pci/pm for v5.8.
Thanks!

Bjorn
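
[Editor's note: for readers unfamiliar with the registers discussed above,
the following is a minimal sketch of the LNKCAP2/LNKCTL2 condition Mika
describes. The helper name is hypothetical and this is not the patch that
was applied; it only illustrates reading the two registers with the
standard pcie_capability_*() accessors and config-space constants.]

  #include <linux/pci.h>

  /*
   * Hypothetical helper, for illustration only: returns true if the port
   * advertises speeds above 5 GT/s in Link Capabilities 2 while Link
   * Control 2 caps the target link speed at 2.5 GT/s.
   */
  static bool pcie_target_speed_capped_to_2_5gt(struct pci_dev *pdev)
  {
          u32 lnkcap2;
          u16 lnkctl2;

          pcie_capability_read_dword(pdev, PCI_EXP_LNKCAP2, &lnkcap2);
          pcie_capability_read_word(pdev, PCI_EXP_LNKCTL2, &lnkctl2);

          /* Supported Link Speeds include something above 5 GT/s ... */
          if (!(lnkcap2 & (PCI_EXP_LNKCAP2_SLS_8_0GB |
                           PCI_EXP_LNKCAP2_SLS_16_0GB |
                           PCI_EXP_LNKCAP2_SLS_32_0GB)))
                  return false;

          /* ... but the Target Link Speed is limited to 2.5 GT/s */
          return (lnkctl2 & PCI_EXP_LNKCTL2_TLS) == PCI_EXP_LNKCTL2_TLS_2_5GT;
  }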