On Thu, May 26, 2022 at 03:53:55PM -0500, Bjorn Helgaas wrote:
> On Thu, May 26, 2022 at 02:25:12PM -0500, Rob Herring wrote:
> > On Mon, May 23, 2022 at 05:10:36PM -0500, Bjorn Helgaas wrote:
> > > On Sat, May 21, 2022 at 02:51:42PM -0400, Jim Quinlan wrote:
> > > > On Sat, May 21, 2022 at 12:43 PM Bjorn Helgaas <helgaas@xxxxxxxxxx> wrote:
> > > > > On Wed, May 18, 2022 at 03:42:11PM -0400, Jim Quinlan wrote:
> > > > > >
> > > > > I added Rafael because this seems vaguely similar to runtime power
> > > > > management, and if we can integrate with that somehow, I'd sure like
> > > > > to avoid building a parallel infrastructure for it.
> > > > >
> > > > > The current path we're on is to move some of this code that's
> > > > > currently in pcie-brcmstb.c to the PCIe portdrv [0]. I'm a little
> > > > > hesitant about that because ACPI does just fine without it. If we're
> > > > > adding new DT functionality that could not be implemented via ACPI,
> > > > > that's one thing. But I'm not convinced this is that new.
> > > >
> > > > AFAICT, Broadcom STB and Cable Modem products do not have/use/want
> > > > ACPI. We are fine with keeping this "PCIe regulator" feature
> > > > private to our driver and giving you speedy and full support in
> > > > maintaining it.
> > >
> > > I don't mean that you should use ACPI, only that ACPI platforms can do
> > > this sort of power control using the existing PCI core infrastructure,
> > > and maybe there's a way for OF/DT platforms to hook into that same
> > > infrastructure to minimize the driver-specific work. E.g., maybe
> > > there's a way to extend platform_pci_set_power_state() and similar to
> > > manage these regulators.
> >
> > The big difference is ACPI abstracts how to control power for a device.
> > The OS just knows D0, D3, etc. states. For DT, there is no such
> > abstraction. You need device specific code to do device specific power
> > management.
>
> I'm thinking about the PCI side of the host controller, which should
> live by the PCI rules. There are device-specific ways to control
> power, clocks, resets, etc on the PCI side, but drivers for PCI
> devices (as opposed to drivers for the host controllers) can't really
> call that code directly.

Yes, there are PCI specific ways to handle some of this when it is
signals or power for standard PCI slots. But then it's also possible
that you have a soldered down device that has extra or different
interfaces.

When this Broadcom thread was reviewed originally, I was the one
pushing towards doing this in the portdrv. That seems like the more
logical place at least to control the root port state, even if we need
host controller specific routines to do the work. It's all related to
how we separate out host bridge and root port operations.

> There are some exceptions, but generally speaking I don't think PCI
> drivers that use generic power management need to use PCI_D0,
> PCI_D3hot, etc directly. Generic PM uses interfaces like
> pci_pm_suspend() that keep most of the PCI details in the PCI core
> instead of the endpoint driver, e.g., [3].

Yeah, I think that's a different issue.

> The PCI core has a bunch of interfaces:
>
>   platform_pci_power_manageable()
>   platform_pci_set_power_state()
>   platform_pci_get_power_state()
>   platform_pci_choose_state()
>
> that currently mostly use ACPI.
> So I'm wondering whether there's some way to extend those platform_*()
> interfaces to call the native host controller device-specific power
> control code via an ops structure.
>
> Otherwise it feels like the native host controller drivers are in a
> different world than the generic PM world, and we'll end up with every
> host controller driver reimplementing things.
>
> For example, how would we runtime suspend a Root Port and turn off
> power for PCI devices below it? Obviously that requires
> device-specific code to control the power. Do we have some common
> interface to it, or do we have to trap config writes to PCI_PM_CTRL or
> something?

Shrug. Honestly, the PCI specific power management stuff is not
something I've studied. I'm a bit more fluent in runtime PM.

Somewhat related to all this is this thread [4] where I've suggested
that the right way to save power when there's no link (or no child
device, really) is using runtime PM rather than just failing probe. We
also don't need each host controller doing its own conformance test
hacks.

> [3] https://git.kernel.org/linus/cd97b7e0d780
>
> > > > > [0] https://lore.kernel.org/r/20211110221456.11977-6-jim2101024@xxxxxxxxx
> > > > >
> > > > > IIUC, this path:
> > > > >
> > > > >   pci_alloc_child_bus
> > > > >   brcm_pcie_add_bus              # .add_bus method
> > > > >   pci_subdev_regulators_add_bus  # in pcie-brcmstb.c for now
> > > > >   alloc_subdev_regulators        # in pcie-brcmstb.c for now
> > > > >   regulator_bulk_get
> > > > >   regulator_bulk_enable
> > > > >   brcm_pcie_linkup               # bring link up
> > > > >
> > > > > is basically so we can leave power to downstream devices off, then
> > > > > turn it on when we're ready to enumerate those downstream devices.
> > > >
> > > > Yes -- it is the "chicken-and-egg" problem. Ideally, we would like
> > > > for the endpoint driver to turn on its own regulators, but even to
> > > > know which endpoint driver to probe we must turn on the regulator to
> > > > establish linkup.
> > >
> > > I don't think having an endpoint driver turn on power to its device is
> > > the right goal.
> >
> > DT requires device specific code to control a specific device. That
> > belongs in the driver for that device.
>
> I must be talking about something different than you are. I see that
> brcmstb has device-specific code to control the brcmstb device as well
> as power for PCI devices downstream from that device.
>
> When I read "endpoint driver" I think of a PCIe Endpoint device like a
> NIC. That's just a random PCI device, and I read "endpoint driver to
> turn on its own regulators" as suggesting that the NIC driver (e1000,
> etc) would turn on power to the NIC. Is that the intent?

Yes! A NIC as an add-in card doesn't need anything because it's behind
a standard PCI connector with standard power sequencing. But take that
same NIC chip and solder it down on a board. Then the board designers
start cost saving and removing components. For example, there's no
need for standard-PCI-supply-to-chip-supply regulators (e.g. 12V/3.3V
down to whatever the chip needs). Who needs an EEPROM with a MAC
address, either?

I think there are roughly 2 cases we're dealing with: platform-specific
ways to do power control on standard PCIe slots/connectors, and
non-standard connections that need downstream-device-specific ways to
do power management (including powering on just to be discovered). The
line is blurred a bit because the latter case still needs some of the
former (at least any in-band PCI power management).
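To make sure we're talking about the same thing, here's roughly the
shape of the ops-structure idea as I understand it. This is purely a
sketch: neither struct pci_host_bridge_pm_ops nor the pm_ops member
exist today, and all of the names below are made up for illustration.

  #include <linux/pci.h>

  /*
   * Hypothetical: per-host-bridge PM callbacks that an OF/DT host
   * bridge driver (e.g. pcie-brcmstb.c) could fill in, and that the
   * core's platform_pci_set_power_state() and friends would fall back
   * to when there is no ACPI companion.
   */
  struct pci_host_bridge_pm_ops {
          bool (*power_manageable)(struct pci_dev *dev);
          int (*set_power_state)(struct pci_dev *dev, pci_power_t state);
          pci_power_t (*get_power_state)(struct pci_dev *dev);
  };

  static int of_pci_set_power_state(struct pci_dev *dev, pci_power_t state)
  {
          struct pci_host_bridge *bridge = pci_find_host_bridge(dev->bus);

          /* pm_ops is the hypothetical new struct pci_host_bridge member */
          if (!bridge->pm_ops || !bridge->pm_ops->set_power_state)
                  return -ENODEV;

          /*
           * For the brcmstb case, this is where the driver would enable
           * or disable the regulators it gets from DT for the devices
           * below the Root Port, keyed off D0 vs D3cold, instead of
           * doing it from an .add_bus callback.
           */
          return bridge->pm_ops->set_power_state(dev, state);
  }

But whether something like that hangs off the host bridge or off each
Root Port (i.e. the portdrv) is exactly the host bridge vs. root port
split I mentioned above, so take the placement with a grain of salt.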
The problem I see all the time (not just in PCI) is people trying to
implement something generic/common rather than device specific, which
then makes its way into bindings. The only way something generic works
is if there's a spec behind it. For PCI slots there is, but it is
important that we distinguish the 2 cases.

Rob

[4] https://lore.kernel.org/linux-pci/YksDJfterGl9uPjs@xxxxxxxxxxxxxxxxxx/