Hi,

A while ago I realised that I was having all kinds of issues with my
connection: ~933 Mbit had become ~40 Mbit. This only applied to links to
the internet (via a Linux firewall running NAT). While debugging with the
help of Alexander Duyck, we realised that ASPM could be the culprit (at
least, disabling ASPM on the NIC itself made things work just fine).

So while trying to understand PCIe and such things, I found this: the
calculation of the max delay looked at "that node" + start latency *
"hops", but one hop might have a larger latency and break the acceptable
delay.

After a lot of playing around with the code, I ended up with the patch
below. It seems to fix my problem, and it sets two PCIe bridges to "ASPM
Disabled" that didn't happen before.

I do however have questions:
- Shouldn't the change be applied to the endpoint? Or should it be
  applied recursively along the path to the endpoint?
- Also, the L0s checks are only done on the local links; is this correct?

diff --git a/drivers/pci/pcie/aspm.c b/drivers/pci/pcie/aspm.c
index b17e5ffd31b1..bd53fba7f382 100644
--- a/drivers/pci/pcie/aspm.c
+++ b/drivers/pci/pcie/aspm.c
@@ -434,7 +434,7 @@ static void pcie_get_aspm_reg(struct pci_dev *pdev,
 
 static void pcie_aspm_check_latency(struct pci_dev *endpoint)
 {
-	u32 latency, l1_switch_latency = 0;
+	u32 latency, l1_max_latency = 0, l1_switch_latency = 0;
 	struct aspm_latency *acceptable;
 	struct pcie_link_state *link;
 
@@ -470,8 +470,9 @@ static void pcie_aspm_check_latency(struct pci_dev *endpoint)
 		 * substate latencies (and hence do not do any check).
 		 */
 		latency = max_t(u32, link->latency_up.l1, link->latency_dw.l1);
+		l1_max_latency = max_t(u32, latency, l1_max_latency);
 		if ((link->aspm_capable & ASPM_STATE_L1) &&
-		    (latency + l1_switch_latency > acceptable->l1))
+		    (l1_max_latency + l1_switch_latency > acceptable->l1))
 			link->aspm_capable &= ~ASPM_STATE_L1;
 		l1_switch_latency += 1000;