On Sun, 2024-12-15 at 11:20 +0100, Lukas Wunner wrote:
> If a PCIe port only supports a single speed, enabling bandwidth control
> is pointless: There's no need to monitor autonomous speed changes, nor
> can the speed be changed.
> 
> Not enabling it saves a small amount of memory and compute resources,
> but also fixes a boot hang reported by Niklas: It occurs when enabling
> bandwidth control on Downstream Ports of Intel JHL7540 "Titan Ridge 2018"
> Thunderbolt controllers. The ports only support 2.5 GT/s in accordance
> with USB4 v2 sec 11.2.1, so the present commit works around the issue.
> 
> PCIe r6.2 sec 8.2.1 prescribes that:
> 
>    "A device must support 2.5 GT/s and is not permitted to skip support
>     for any data rates between 2.5 GT/s and the highest supported rate."
> 
> Consequently, bandwidth control is currently only disabled if a port
> doesn't support higher speeds than 2.5 GT/s. However the Implementation
> Note in PCIe r6.2 sec 7.5.3.18 cautions:
> 
>    "It is strongly encouraged that software primarily utilize the
>     Supported Link Speeds Vector instead of the Max Link Speed field,
>     so that software can determine the exact set of supported speeds on
>     current and future hardware. This can avoid software being confused
>     if a future specification defines Links that do not require support
>     for all slower speeds."
> 
> In other words, future revisions of the PCIe Base Spec may allow gaps
> in the Supported Link Speeds Vector. To be future-proof, don't just
> check whether speeds above 2.5 GT/s are supported, but rather check
> whether *more than one* speed is supported.
> 
> Fixes: 665745f27487 ("PCI/bwctrl: Re-add BW notification portdrv as PCIe BW controller")
> Reported-by: Niklas Schnelle <niks@xxxxxxxxxx>
> Closes: https://lore.kernel.org/r/db8e457fcd155436449b035e8791a8241b0df400.camel@xxxxxxxxxx/
> Signed-off-by: Lukas Wunner <lukas@xxxxxxxxx>
> Cc: Ilpo Järvinen <ilpo.jarvinen@xxxxxxxxxxxxxxx>
> ---
>  drivers/pci/pcie/portdrv.c | 4 +++-
>  1 file changed, 3 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/pci/pcie/portdrv.c b/drivers/pci/pcie/portdrv.c
> index 5e10306b6308..02e73099bad0 100644
> --- a/drivers/pci/pcie/portdrv.c
> +++ b/drivers/pci/pcie/portdrv.c
> @@ -265,12 +265,14 @@ static int get_port_device_capability(struct pci_dev *dev)
>  	    (pcie_ports_dpc_native || (services & PCIE_PORT_SERVICE_AER)))
>  		services |= PCIE_PORT_SERVICE_DPC;
>  
> +	/* Enable bandwidth control if more than one speed is supported. */
>  	if (pci_pcie_type(dev) == PCI_EXP_TYPE_DOWNSTREAM ||
>  	    pci_pcie_type(dev) == PCI_EXP_TYPE_ROOT_PORT) {
>  		u32 linkcap;
>  
>  		pcie_capability_read_dword(dev, PCI_EXP_LNKCAP, &linkcap);
> -		if (linkcap & PCI_EXP_LNKCAP_LBNC)
> +		if (linkcap & PCI_EXP_LNKCAP_LBNC &&
> +		    hweight8(dev->supported_speeds) > 1)
>  			services |= PCIE_PORT_SERVICE_BWCTRL;
>  	}
> 

I can confirm that this, in combination with the two other patches, fixes
my problem. I'm still a little unsure whether we want to go with a more
minimal patch for v6.13-rc so we can take more time to figure out the
correct handling in patch 1, but medium term I think this is the right
overall approach.

Either way, feel free to add:

Tested-by: Niklas Schnelle <niks@xxxxxxxxxx>
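
For reference, here is the gist of the new condition as I read it, written
out as a self-contained sketch. The helper name and the direct register
read are illustrative only, not the code from the series; the actual patch
checks the cached dev->supported_speeds instead, which (as I understand
it) is derived from the same register and also synthesizes a value for
older devices that leave Link Capabilities 2 zeroed:

/*
 * Illustrative sketch only (needs <linux/pci.h> and <linux/bitops.h>):
 * count the data rates advertised in the Supported Link Speeds Vector
 * (Link Capabilities 2, bits 7:1) and report whether the port can run
 * at more than one speed, i.e. whether bandwidth control is useful.
 */
static bool port_has_multiple_speeds(struct pci_dev *dev)
{
	u32 lnkcap2;

	pcie_capability_read_dword(dev, PCI_EXP_LNKCAP2, &lnkcap2);

	/* Bits 7:1 of LNKCAP2 form the Supported Link Speeds Vector. */
	return hweight8((lnkcap2 >> 1) & 0x7f) > 1;
}

Counting set bits rather than comparing against the 2.5 GT/s bit is what
keeps this correct even if a future spec revision allows gaps in the
vector, as the commit message explains.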