Re: [PATCH] PCI: Do not use pcie_get_speed_cap() to determine when to start waiting

On Thu, Apr 16, 2020 at 11:32:45AM +0300, Mika Westerberg wrote:
> Kai-Heng Feng reported that it takes a long time (>1s) to resume
> Thunderbolt-connected PCIe devices from both runtime suspend and system
> sleep (s2idle).
> 
> On these PCIe downstream ports, the Link Capabilities 2 register
> (PCI_EXP_LNKCAP2) announces support for speeds > 5 GT/s, but the
> target speed is then capped to 2.5 GT/s by the Link Control 2
> (PCI_EXP_LNKCTL2) register. This possibility was not considered in
> pci_bridge_wait_for_secondary_bus(), so it ended up waiting for
> 1100 ms, as these ports do not support active link layer reporting
> either.
> 
> PCIe spec 5.0 section 6.6.1 mandates that we must wait a minimum of
> 100 ms before sending a configuration request to the device below if
> the port does not support speeds > 5 GT/s; if it does, we first need
> to wait for the data link layer to become active before waiting that
> 100 ms.
> 
> PCIe spec 5.0 section 7.5.3.6 further says that all downstream ports
> that support speeds > 5 GT/s must support active link layer
> reporting, so instead of looking at the speed we can check for the
> active link layer reporting capability and determine how to wait
> based on that (the two go hand in hand).

I can't quite tell what the defect is here.

I assume you're talking about this text from sec 6.6.1:

  - With a Downstream Port that does not support Link speeds greater
    than 5.0 GT/s, software must wait a minimum of 100 ms before
    sending a Configuration Request to the device immediately below
    that Port.

  - With a Downstream Port that supports Link speeds greater than 5.0
    GT/s, software must wait a minimum of 100 ms after Link training
    completes before sending a Configuration Request to the device
    immediately below that Port. Software can determine when Link
    training completes by polling the Data Link Layer Link Active bit
    or by setting up an associated interrupt (see Section 6.7.3.3).

I don't understand what Link Control 2 has to do with this.  The spec
talks about ports *supporting* certain link speeds, which sounds to me
like the Link Capabilities.  It doesn't say anything about the current
or target link speed.
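
To make the distinction concrete, here's a rough sketch (illustrative
only, not the patch under review; pcie_supports_gt_5GT() is a made-up
helper, field names from include/uapi/linux/pci_regs.h):

  #include <linux/pci.h>

  /*
   * Hypothetical helper: sec 6.6.1 asks what speeds the port
   * *supports*, which is Link Capabilities 2 territory.
   */
  static bool pcie_supports_gt_5GT(struct pci_dev *dev)
  {
          u32 lnkcap2;
          u16 lnkctl2;

          /* Supported Link Speeds Vector: what the port can do */
          pcie_capability_read_dword(dev, PCI_EXP_LNKCAP2, &lnkcap2);

          /* Target Link Speed: merely a ceiling for link training */
          pcie_capability_read_word(dev, PCI_EXP_LNKCTL2, &lnkctl2);
          pci_dbg(dev, "target speed field: %#x\n",
                  lnkctl2 & PCI_EXP_LNKCTL2_TLS);

          /* 8 GT/s and up; higher-gen bits omitted for brevity */
          return lnkcap2 & (PCI_EXP_LNKCAP2_SLS_8_0GB |
                            PCI_EXP_LNKCAP2_SLS_16_0GB);
  }

A port can advertise 8 GT/s in LNKCAP2 while LNKCTL2 caps training to
2.5 GT/s; that changes the speed the link *runs* at, not the speeds
the port *supports*.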

> Link: https://bugzilla.kernel.org/show_bug.cgi?id=206837
> Reported-by: Kai-Heng Feng <kai.heng.feng@xxxxxxxxxxxxx>
> Tested-by: Kai-Heng Feng <kai.heng.feng@xxxxxxxxxxxxx>
> Signed-off-by: Mika Westerberg <mika.westerberg@xxxxxxxxxxxxxxx>
> ---
>  drivers/pci/pci.c | 8 +++++++-
>  1 file changed, 7 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
> index 595fcf59843f..d9d9ff5b968e 100644
> --- a/drivers/pci/pci.c
> +++ b/drivers/pci/pci.c
> @@ -4822,7 +4822,13 @@ void pci_bridge_wait_for_secondary_bus(struct pci_dev *dev)
>  	if (!pcie_downstream_port(dev))
>  		return;
>  
> -	if (pcie_get_speed_cap(dev) <= PCIE_SPEED_5_0GT) {
> +	/*
> +	 * The PCIe spec mandates that all downstream ports that support
> +	 * speeds greater than 5 GT/s must also support data link layer
> +	 * active reporting, so we use that here to determine when the
> +	 * delay should be issued.
> +	 */
> +	if (!dev->link_active_reporting) {
>  		pci_dbg(dev, "waiting %d ms for downstream link\n", delay);
>  		msleep(delay);
>  	} else {
> -- 
> 2.25.1
> 
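
FWIW, dev->link_active_reporting comes straight from the Data Link
Layer Link Active Reporting Capable bit in Link Capabilities at
enumeration time; roughly this (a paraphrase from memory of
set_pcie_port_type() in drivers/pci/probe.c; the wrapper name is
made up):

  #include <linux/pci.h>

  /* Sketch of how link_active_reporting gets populated */
  static void set_link_active_reporting(struct pci_dev *pdev)
  {
          u32 linkcap;

          pcie_capability_read_dword(pdev, PCI_EXP_LNKCAP, &linkcap);

          /* Data Link Layer Link Active Reporting Capable */
          if (linkcap & PCI_EXP_LNKCAP_DLLLARC)
                  pdev->link_active_reporting = 1;
  }

So testing link_active_reporting amounts to testing the LNKCAP bit
that sec 7.5.3.6 ties to support for speeds greater than 5 GT/s.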


