On Fri, Jun 16, 2023 at 01:27:52PM +0100, Maciej W. Rozycki wrote:
> On Thu, 15 Jun 2023, Bjorn Helgaas wrote:
> As per my earlier remark:
>
> > I think making a system halfway-fixed would make little sense, but with
> > the actual fix actually made last as you suggested I think this can be
> > split off, because it'll make no functional change by itself.
>
> I am not perfectly happy with your rearrangement to fold the !PCI_QUIRKS
> stub into the change carrying the actual workaround and then have the
> reset path update with a follow-up change only, but I won't fight over it.
> It's only one tree revision that will be in this halfway-fixed state and
> I'll trust your judgement here.

Thanks for raising this.  Here's my thought process:

  12  PCI: Provide stub failed link recovery for device probing and hot plug
  13  PCI: Add failed link recovery for device reset events
  14  PCI: Work around PCIe link training failures

Patch 12 [1] adds calls to pcie_failed_link_retrain(), which does
nothing and returns false.  Functionally, it's a no-op, but the
structure is important later.

Patch 13 [2] claims to request failed link recovery after resets, but
doesn't actually do anything yet because pcie_failed_link_retrain() is
still a no-op, which is a bit confusing.

Patch 14 [3] implements pcie_failed_link_retrain(), so the recovery
mentioned in patches 12 and 13 actually happens.  But this patch
doesn't add the call to pcie_failed_link_retrain(), so it's a little
hard to connect the dots.

I agree that, as I rearranged it, the workaround doesn't apply in all
cases simultaneously.  Maybe not ideal, but maybe not terrible either.

Looking at it again, maybe it would have made more sense to move the
pcie_wait_for_link_delay() change to the last patch along with the
pci_dev_wait() change.  I dunno.

Bjorn

[1] 12 https://lore.kernel.org/r/alpine.DEB.2.21.2306111619570.64925@xxxxxxxxxxxxxxxxx
[2] 13 https://lore.kernel.org/r/alpine.DEB.2.21.2306111631050.64925@xxxxxxxxxxxxxxxxx
[3] 14 https://lore.kernel.org/r/alpine.DEB.2.21.2305310038540.59226@xxxxxxxxxxxxxxxxx
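
For reference, the !PCI_QUIRKS stub arrangement being discussed ends up
looking roughly like the usual declaration/stub pattern below (a sketch
only, assuming the customary CONFIG_PCI_QUIRKS split in drivers/pci/pci.h;
the exact hunks, and which patch in the series carries them, are what's
being debated above):

  /*
   * Sketch of the split: with CONFIG_PCI_QUIRKS enabled, the real
   * implementation (the quirk in drivers/pci/quirks.c) is used; without
   * it, callers in the probe/hot-plug and reset paths get a no-op that
   * reports "nothing retrained" by returning false.
   */
  #ifdef CONFIG_PCI_QUIRKS
  bool pcie_failed_link_retrain(struct pci_dev *dev);
  #else
  static inline bool pcie_failed_link_retrain(struct pci_dev *dev)
  {
          return false;
  }
  #endif

With a stub like that in place, patch 12 can wire up the call sites
without changing any behavior, and the workaround only takes effect once
patch 14 supplies the real implementation.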