On Sun, Apr 30, 2023 at 05:24:26PM -0400, Jim Quinlan wrote:
> On Sun, Apr 30, 2023 at 3:13 PM Bjorn Helgaas <helgaas@xxxxxxxxxx> wrote:
> > On Fri, Apr 28, 2023 at 06:34:57PM -0400, Jim Quinlan wrote:
> > > Since the STB PCIe HW will cause a CPU abort on a PCIe transaction
> > > completion timeout abort, we might as well extend the default timeout
> > > limit.  Further, different devices and systems may requires a larger or
> > > smaller amount commensurate with their L1SS exit time, so the property
> > > "brcm,completion-timeout-us" may be used to set a custom timeout value.
> >
> > s/requires/require/
> >
> > AFAIK, other platforms do not tweak Configuration Timeout values based
> > on L1SS exit time.  Why is brcm different?
>
> Keep in mind that our Brcm PCIe HW signals a CPU abort on a PCIe
> completion timeout.  Other PCIe HW just returns 0xffffffff.

Most does, but I'm pretty sure there are other controllers used on
arm64 that signal CPU aborts, e.g., imx6q_pcie_abort_handler() seems
similar.

> I've been maintaining this driver for over eight years or so and we've
> done fine with the HW default completion timeout value.
> Only recently has a major customer requested that this timeout value
> be changed, and their reason was so they could
> avoid a CPU abort when using L1SS.
>
> Now we could set this value to a big number for all cases and not
> require "brcm,completion-timeout-us".  I cannot see any
> downsides, other than another customer coming along asking us to
> double the default or lessen it.
>
> But I'm certainly willing to do that -- would that be acceptable?

That would be fine with me.

Bjorn