Re: [PATCH] PCI: pciehp: Clear LBMS on hot-remove to prevent link speed reduction

Hi Ilpo,

On 4/24/2024 2:32 AM, Ilpo Järvinen wrote:
On Wed, 24 Apr 2024, Smita Koralahalli wrote:

Clear Link Bandwidth Management Status (LBMS), if set, on a hot-remove event.

A hot-remove event can result in a target link speed reduction if LBMS
is set, because the Presence Detect State Change (PDSC) can arrive after
the Data Link Layer State Change event (DLLSC).

In reality, PDSC and DLLSC events rarely arrive simultaneously. The PDSC
can come so late that the slot has already been powered down by the
DLLSC event alone, and the delayed PDSC is then falsely interpreted as
an interrupt requesting that the slot be turned on. This false power-on
of the slot, without a link, forces the kernel, if LBMS is set, to
retrain the link to a lower speed to re-establish it, thereby bringing
down the link speed [2].

From PCIe r6.2 sec 7.5.3.8 [1] it can be derived that LBMS cannot be
set for an unconnected link, and that if it is set, it indicates there
is actually a device down an inactive link. However, hardware could have
already set LBMS while the device was still connected to the port, i.e.
while the state was DL_Up or DL_Active. Some hardware may even have
attempted a retrain by going into Recovery just before transitioning to
DL_Down.

Thus an LBMS bit set earlier is never cleared and can force software to
drop the link speed when there is no link [2].

Dmesg before:
	pcieport 0000:20:01.1: pciehp: Slot(59): Link Down
	pcieport 0000:20:01.1: pciehp: Slot(59): Card present
	pcieport 0000:20:01.1: broken device, retraining non-functional downstream link at 2.5GT/s
	pcieport 0000:20:01.1: retraining failed
	pcieport 0000:20:01.1: pciehp: Slot(59): No link

Dmesg after:
	pcieport 0000:20:01.1: pciehp: Slot(59): Link Down
	pcieport 0000:20:01.1: pciehp: Slot(59): Card present
	pcieport 0000:20:01.1: pciehp: Slot(59): No link

[1] PCI Express Base Specification Revision 6.2, Jan 25 2024.
     https://members.pcisig.com/wg/PCI-SIG/document/20590
[2] Commit a89c82249c37 ("PCI: Work around PCIe link training failures")

Signed-off-by: Smita Koralahalli <Smita.KoralahalliChannabasappa@xxxxxxx>
---
1. Should be based on top of fixes for link retrain status in
pcie_wait_for_link_delay()
https://patchwork.kernel.org/project/linux-pci/list/?series=824858
https://lore.kernel.org/linux-pci/53b2239b-4a23-a948-a422-4005cbf76148@xxxxxxxxxxxxxxx/

Without the fixes, the patch output would be:
	pcieport 0000:20:01.1: pciehp: Slot(59): Link Down
	pcieport 0000:20:01.1: pciehp: Slot(59): Card present
	pcieport 0000:20:01.1: broken device, retraining non-functional downstream link at 2.5GT/s
	pcieport 0000:20:01.1: retraining failed
	pcieport 0000:20:01.1: pciehp: Slot(59): No device found.

Did you hit the 60 sec delay issue without series 824858? If you've tested
them and the fixes helped your case, could you perhaps give Tested-by for
that series too (in the relevant thread)?

I'm assuming the 60s delay issue is from pci_dev_wait()?

Correct me if I'm wrong.
I think series 824858 potentially fixes the bug in two different places. What you are seeing is in the suspend/resume path, reached through the calls below:

pci_pm_runtime_resume()
    pci_pm_bridge_power_up_actions()
      pci_bridge_wait_for_secondary_bus()
        pcie_wait_for_link_delay()
          pcie_failed_link_retrain()
        pci_dev_wait()

But series 824858 helped me by properly returning an error code from pcie_wait_for_link_delay(), and also by avoiding the 100 ms delay inside pcie_wait_for_link_delay() and probably the timeout in pcie_wait_for_presence().

The sequence of operations I'm looking at, after a PDSC event, is as below:
pciehp_handle_presence_or_link_change()
  pciehp_enable_slot()
    __pciehp_enable_slot()
      board_added()
        pciehp_check_link_status()
          pcie_wait_for_link()
            pcie_wait_for_link_delay()
              pcie_failed_link_retrain()

pcie_failed_link_retrain() would initially return false on a "failed link retrain" attempt, which would make pcie_wait_for_link_delay() and pcie_wait_for_link() erroneously succeed, thereby unnecessarily proceeding with the other checks in pciehp_check_link_status().

Series 824858 fixes the bug by properly returning an error code.
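
To make the conflation concrete, here is a minimal invented sketch -- the
names mirror the real functions, but the bodies are made up purely for
illustration and are not the actual drivers/pci code:

	#include <stdbool.h>

	/*
	 * Before series 824858: a bool result where "false" covers both
	 * "no workaround needed" and "retrain attempted but failed".
	 */
	static bool failed_link_retrain_old(bool needs_workaround, bool retrain_ok)
	{
		if (!needs_workaround)
			return false;	/* nothing to do */
		if (!retrain_ok)
			return false;	/* retrain FAILED -- same value! */
		return true;		/* retrained, link usable */
	}

	/*
	 * The shape of the fix: a distinct error code lets
	 * pcie_wait_for_link_delay() and pcie_wait_for_link() fail
	 * instead of erroneously succeeding over a dead link.
	 */
	static int failed_link_retrain_new(bool needs_workaround, bool retrain_ok)
	{
		if (!needs_workaround)
			return 0;
		if (!retrain_ok)
			return -1;	/* the real series picks proper -E* codes */
		return 0;
	}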

However, I had missed your patchset when I initially wrote this. From the patch below I see you have addressed clearing LBMS as well, but in a different place. I didn't understand why it was dropped, though.

https://lore.kernel.org/all/20240129112710.2852-2-ilpo.jarvinen@xxxxxxxxxxxxxxx/

From what I understood while experimenting, tracking the set/unset behavior of LBMS is hard, as HW has the right to attempt a retrain at any point except when the status is DL_Down, per this statement from the spec: "This bit is Set by hardware to indicate that either of the following has occurred without the Port transitioning through DL_Down status".

An LBMS bit set while the port was not in DL_Down is never unset after the Port transitions to DL_Down. So I think clearing it once the port status is DL_Down (which I assume happens after the DLLSC interrupt fires to bring the slot down) keeps it clear only while the port remains in the DL_Down state. As soon as the port transitions to other states (and I don't know how SW could track those states), there is no guarantee the bit is still clear, as HW might have attempted a retrain.

The only way would be to track the DL_Down and Active/Up states. At the moment I don't know how to do that, or whether it is even possible in SW in the first place. Hence, I'm more inclined toward pushing [2] below as the fix for the link speed drop. However, that has some complexities as well :(

After talking to HW folks, one place where our HW sets LBMS is:

Device is removed.
  DL_Up doesn't immediately change.
    HW just sees errors on the receiver, doesn't automatically know
    that they were caused by a hot remove, and tries to recover the
    link.
    LBMS gets set here as a rate change is attempted.
    DL_Down occurs after this.

Once DL_Down occurs, nobody is supposed to set the bit. But the already-set bit is never cleared, and that is what is creating all the issues.

HW mostly relies on other parameters and LTSSM behavior when transitioning between the Active/Up and Down states, and I'm not sure at the moment how much of that is visible to the OS. :/


2. I initially attempted to wait for both the PDSC and DLLSC events to
happen and only then turn on the slot.
Similar to: https://lore.kernel.org/lkml/20190205210701.25387-1-mr.nuke.me@xxxxxxxxx/
but applied before turning on the slot.

Something like this, in pciehp_handle_presence_or_link_change():
-		ctrl->state = POWERON_STATE;
-		mutex_unlock(&ctrl->state_lock);
-		if (present)
+		if (present && link_active) {
+			ctrl->state = POWERON_STATE;
+			mutex_unlock(&ctrl->state_lock);
			ctrl_info(ctrl, "Slot(%s): Card present\n",
				  slot_name(ctrl));
-		if (link_active)
			ctrl_info(ctrl, "Slot(%s): Link Up\n",
				  slot_name(ctrl));
-		ctrl->request_result = pciehp_enable_slot(ctrl);
-		break;
+			ctrl->request_result = pciehp_enable_slot(ctrl);
+			break;
+		} else {
+			mutex_unlock(&ctrl->state_lock);
+			break;
+		}

This would also avoid printing the lines below on a remove event.
pcieport 0000:20:01.1: pciehp: Slot(59): Card present
pcieport 0000:20:01.1: pciehp: Slot(59): No link

I understand this would likely not be applicable where broken devices
hardwire PDS to zero and a PDSC would never happen. But I'm open to
making changes if this approach is more applicable, because SW cannot
directly track HW's interference in attempting a link retrain and
setting LBMS.

3. I tried introducing a delay similar to pcie_wait_for_presence(), but
I was not successful in picking the right numbers and hence hit the same
link speed drop.

4. For some reason I was unable to clear LBMS with:
	pcie_capability_clear_word(ctrl->pcie->port, PCI_EXP_LNKSTA,
				   PCI_EXP_LNKSTA_LBMS);

LBMS is write-1-to-clear; pcie_capability_clear_word() tries to write 0
there. The accessor doesn't do what you seem to expect: it clears normal
read-write bits, not write-1-to-clear bits.
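
I.e., roughly this -- paraphrasing the pcie_capability_*() helpers from
include/linux/pci.h, not quoting their exact code:

	u16 val;

	/* pcie_capability_clear_word() expands to a read-modify-write: */
	pcie_capability_read_word(dev, PCI_EXP_LNKSTA, &val);
	val &= ~PCI_EXP_LNKSTA_LBMS;	/* LBMS becomes 0 in val ... */
	pcie_capability_write_word(dev, PCI_EXP_LNKSTA, val);
					/* ... and writing 0 to a W1C bit
					 * has no effect, LBMS stays set */

	/* A write-1-to-clear bit must be written back as 1 instead: */
	pcie_capability_write_word(dev, PCI_EXP_LNKSTA, PCI_EXP_LNKSTA_LBMS);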

Got it, thanks!


---
  drivers/pci/hotplug/pciehp_pci.c | 8 +++++++-
  1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/drivers/pci/hotplug/pciehp_pci.c b/drivers/pci/hotplug/pciehp_pci.c
index ad12515a4a12..9155fdfd1d37 100644
--- a/drivers/pci/hotplug/pciehp_pci.c
+++ b/drivers/pci/hotplug/pciehp_pci.c
@@ -92,7 +92,7 @@ void pciehp_unconfigure_device(struct controller *ctrl, bool presence)
  {
  	struct pci_dev *dev, *temp;
  	struct pci_bus *parent = ctrl->pcie->port->subordinate;
-	u16 command;
+	u16 command, lnksta;
 
 	ctrl_dbg(ctrl, "%s: domain:bus:dev = %04x:%02x:00\n",
 		 __func__, pci_domain_nr(parent), parent->number);
@@ -134,4 +134,10 @@ void pciehp_unconfigure_device(struct controller *ctrl, bool presence)
  	}
 
 	pci_unlock_rescan_remove();
+
+	/* Clear LBMS on removal */
+	pcie_capability_read_word(ctrl->pcie->port, PCI_EXP_LNKSTA, &lnksta);
+	if (lnksta & PCI_EXP_LNKSTA_LBMS)
+		pcie_capability_write_word(ctrl->pcie->port, PCI_EXP_LNKSTA,
+					   PCI_EXP_LNKSTA_LBMS);

It's enough to unconditionally write PCI_EXP_LNKSTA_LBMS; there is no
need to check first. And the comment just spells out what can already be
read from the code, so I'd drop it.
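
I.e., the tail of pciehp_unconfigure_device() could then be just (sketch):

	pci_unlock_rescan_remove();

	pcie_capability_write_word(ctrl->pcie->port, PCI_EXP_LNKSTA,
				   PCI_EXP_LNKSTA_LBMS);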

Sure, I will make these changes when I send v2, if we decide to address it this way. :)


I agree it makes sense to clear LBMS when the device is removed.

Thank you!


Thanks,
Smita



