> On Mar 12, 2020, at 16:15, Mika Westerberg <mika.westerberg@xxxxxxxxxxxxxxx> wrote:
>
> On Thu, Mar 12, 2020 at 12:41:08PM +0800, Kai-Heng Feng wrote:
>>
>>
>>> On Mar 11, 2020, at 18:38, Mika Westerberg <mika.westerberg@xxxxxxxxxxxxxxx> wrote:
>>>
>>> On Wed, Mar 11, 2020 at 01:39:51PM +0800, Kai-Heng Feng wrote:
>>>> Hi,
>>>>
>>>> I am currently investigating the long suspend and resume times of suspend-to-idle.
>>>> Thunderbolt bridges need to wait 1100ms [1] for runtime-resume on system suspend, and again on system resume.
>>>>
>>>> I made a quick hack to the USB and xHCI drivers to support direct-complete, but I failed to do the same for the parent PCIe bridge, as direct-complete is always disabled for it [2] because device_may_wakeup() returns true for the device:
>>>>
>>>> 	/* Avoid direct_complete to let wakeup_path propagate. */
>>>> 	if (device_may_wakeup(dev) || dev->power.wakeup_path)
>>>> 		dev->power.direct_complete = false;
>>>
>>> You need to be careful here, because otherwise you end up in a situation
>>> where the link is not properly trained and we tear down the whole tree
>>> of devices, which is worse than waiting a bit longer for resume.
>>
>> My idea is to direct-complete when there's no PCI or USB device
>> plugged into the TBT, and use pm_request_resume() in complete() so it
>> won't block resume() or resume_noirq().
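A minimal sketch of that complete() idea, for illustration only: the callback name and its wiring into the port driver's dev_pm_ops are assumptions, not actual pcieport code.

	#include <linux/pm_runtime.h>

	/* Hypothetical complete() callback, not actual pcieport code. */
	static void pcie_portdrv_complete(struct device *dev)
	{
		/*
		 * The port may have been left runtime-suspended across the
		 * system sleep (direct-complete).  pm_request_resume() only
		 * queues an asynchronous runtime resume on pm_wq, so
		 * resume_noirq() and resume() return without waiting for
		 * the downstream link to train.
		 */
		pm_request_resume(dev);
	}
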
> Before doing that..
>
>>>> Once direct-complete is disabled, the full system suspend/resume path is used, so the delay in [1] makes resume really slow.
>>>> So how do we make suspend-to-idle faster? I have some ideas, but I am not sure whether they are feasible:
>>>> - Make the PM core aware that runtime_suspend() already uses the same wakeup as suspend(), so it doesn't need the device_may_wakeup() check to decide on direct-complete.
>>>> - Remove the DPM_FLAG_NEVER_SKIP flag from the pcieport driver, and use pm_request_resume() in its complete() callback to avoid blocking the resume process.
>>>> - Reduce the 1100ms delay. Maybe someone knows the values used by macOS and Windows...
>>>
>>> Which system is this? ICL?
>>
>> CML-H + Titan Ridge.
>
> .. we should really understand this better, because CML-H PCH root ports
> and Titan/Alpine Ridge downstream ports all support active link
> reporting, so instead of the 1000+100ms you should see something like
> this:

Root port for discrete graphics:

# lspci -vvnn -s 00:01.0
00:01.0 PCI bridge [0604]: Intel Corporation Xeon E3-1200 v5/E3-1500 v5/6th Gen Core Processor PCIe Controller (x16) [8086:1901] (rev 02) (prog-if 00 [Normal decode])
	Capabilities: [a0] Express (v2) Root Port (Slot+), MSI 00
		LnkCap:	Port #2, Speed 8GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <256ns, L1 <8us
			ClockPM- Surprise- LLActRep- BwNot+ ASPMOptComp+
		LnkCtl:	ASPM L0s L1 Enabled; RCB 64 bytes Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-

Thunderbolt ports:

# lspci -vvnn -s 04:00
04:00.0 PCI bridge [0604]: Intel Corporation JHL7540 Thunderbolt 3 Bridge [Titan Ridge 2C 2018] [8086:15e7] (rev 06) (prog-if 00 [Normal decode])
	Capabilities: [c0] Express (v2) Downstream Port (Slot+), MSI 00
		LnkCap:	Port #0, Speed 2.5GT/s, Width x4, ASPM L1, Exit Latency L0s <64ns, L1 <1us
			ClockPM- Surprise- LLActRep- BwNot+ ASPMOptComp+
		LnkCtl:	ASPM L1 Enabled; Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-

# lspci -vvnn -s 04:01
04:01.0 PCI bridge [0604]: Intel Corporation JHL7540 Thunderbolt 3 Bridge [Titan Ridge 2C 2018] [8086:15e7] (rev 06) (prog-if 00 [Normal decode])
	Capabilities: [c0] Express (v2) Downstream Port (Slot+), MSI 00
		LnkCap:	Port #1, Speed 2.5GT/s, Width x4, ASPM L1, Exit Latency L0s <64ns, L1 <1us
			ClockPM- Surprise- LLActRep+ BwNot+ ASPMOptComp+
		LnkCtl:	ASPM L1 Enabled; Disabled- CommClk-
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-

# lspci -vvnn -s 04:02
04:02.0 PCI bridge [0604]: Intel Corporation JHL7540 Thunderbolt 3 Bridge [Titan Ridge 2C 2018] [8086:15e7] (rev 06) (prog-if 00 [Normal decode])
	Capabilities: [c0] Express (v2) Downstream Port (Slot+), MSI 00
		LnkCap:	Port #2, Speed 2.5GT/s, Width x4, ASPM L1, Exit Latency L0s <64ns, L1 <1us
			ClockPM- Surprise- LLActRep- BwNot+ ASPMOptComp+
		LnkCtl:	ASPM L1 Enabled; Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-

So both the CML-H PCH and the TBT ports report "LLActRep-" (apart from 04:01, which reports LLActRep+).

> 1. Wait for the link + 100ms for the root port
> 2. Wait for the link + 100ms for the Titan Ridge downstream ports
>    (these run in parallel for all Titan Ridge downstream ports that
>    have something connected)
>
> If there is a TBT device connected, then 2. is repeated for it, and so on.
>
> So the 1000ms+ is really unexpected. Are you running a mainline kernel,
> and if so, can you share dmesg with CONFIG_PCI_DEBUG=y so we can see the
> delays there? Maybe also add some debugging to
> pcie_wait_for_link_delay() where it checks for
> !pdev->link_active_reporting and waits for 1100ms.

I added the debug log in another thread, and it does reach the !pdev->link_active_reporting path. Let me see if setting link active reporting for these ports in a PCI quirk helps (a rough sketch follows below).

Kai-Heng
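
A rough sketch of such a quirk, hedged heavily: it assumes the hardware really does set the DLL Link Active bit despite advertising LLActRep-, and it is untested, not an actual kernel patch. The device IDs are the ones from the lspci output above.

	#include <linux/pci.h>

	/*
	 * Assumption: these ports implement data link layer active
	 * reporting even though LnkCap advertises LLActRep-.  Marking
	 * them as capable lets pcie_wait_for_link_delay() poll the DLL
	 * Link Active bit instead of sleeping for a fixed 1100ms.
	 */
	static void quirk_force_link_active_reporting(struct pci_dev *pdev)
	{
		pdev->link_active_reporting = 1;
	}
	/* Root port and Titan Ridge downstream ports from the lspci output above */
	DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x1901,
				quirk_force_link_active_reporting);
	DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x15e7,
				quirk_force_link_active_reporting);

Note the risk: if the hardware never actually reports the link as active, the wait polls until its timeout and returns failure, which can lead to exactly the device-tree teardown warned about above, so this is only safe if the experiment confirms the bit works.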