On Wed, May 25, 2022 at 04:57:39PM -0500, Bjorn Helgaas wrote:
> On Tue, May 24, 2022 at 12:54:48PM -0400, Jim Quinlan wrote:
> > When brcm_pcie_add_bus() is invoked, we will "get" and enable any
> > regulators that are present in the DT node. If the busno==1, we will
> > also attempt PCIe link-up. If PCIe link-up fails, which can happen for
> > multiple reasons but is most often due to a missing device, we turn
> > on "refusal" mode to prevent our unforgiving PCIe HW from causing an
> > abort on any subsequent PCIe config-space accesses.
> >
> > Further, a failed link-up will have brcm_pcie_probe() stopping and
> > removing the root bus, which in turn invokes brcm_pcie_remove_bus()
> > (actually named pci_subdev_regulators_remove_bus() as it may someday
> > find its way into bus.c), which invokes regulator_bulk_disable() on
> > any regulators that were enabled by the probe.
>
> Ah, thanks! This is the detail I missed. If pci_host_probe()
> succeeds and the link is down, we call brcm_pcie_remove() (the
> driver's .remove() method). That's unusual and possibly unique among
> native host bridge drivers. I'm not sure that's the best pattern
> here. Most drivers can't do that because they expect multiple devices
> on the root bus. And the Root Port is still a functional device on
> its own, even if its link is down. Users likely expect to see it in
> lspci and manipulate it via setpci. It may have AER logs with clues
> about why the link didn't come up.
>
> Again something for future discussion, not for this regression.

I experienced the same end result (the Root Port is not available unless
the link is up during probe) with the imx6 PCI driver, and I'm also not
convinced this is the best decision. I guess one of the reasons for this
behavior is to save some power, but it should be possible to simply
disable the PCIe Root Port in the device tree to handle the use case in
which the PCIe port is not available at all on the system.

Francesco