On Mon, 18 Nov 2024, Jonathan Cameron wrote:

> On Tue, 12 Nov 2024 18:01:50 +0200 (EET)
> Ilpo Järvinen <ilpo.jarvinen@xxxxxxxxxxxxxxx> wrote:
> 
> > On Tue, 12 Nov 2024, Lukas Wunner wrote:
> > 
> > > On Fri, Oct 18, 2024 at 05:47:53PM +0300, Ilpo Järvinen wrote:
> > > > +EXPORT_SYMBOL_GPL(pcie_set_target_speed);
> > > 
> > > My apologies for another belated comment on this series.
> > > This patch is now a688ab21eb72 on pci/bwctrl:
> > > 
> > > I note that pcie_set_target_speed() is not called by a modular user
> > > (CONFIG_PCIE_THERMAL is bool, not tristate), so the above-quoted
> > > export isn't really necessary right now. I don't know if it was
> > > added intentionally because some modular user is expected to show
> > > up in the near future.
> > 
> > It's probably a thinko to add it at all, but there has been talk about
> > other users interested in the API too, so it's not far-fetched that we
> > could see a user. No idea about timelines though.
> > 
> > There are some AMD GPU drivers tweaking the TLS field on their own,
> > but they also touch some HW-specific registers (although, IIRC, they
> > only touch the Endpoint's TLS). I was thinking of converting them,
> > but I'm unsure whether that would yield something very
> > straightforward and end up producing a working conversion (without
> > the ability to test with the HW). TBH, it's not my highest priority
> > item.
> > 
> > > > @@ -135,6 +296,7 @@ static int pcie_bwnotif_probe(struct pcie_device *srv)
> > > >  	if (!data)
> > > >  		return -ENOMEM;
> > > >  
> > > > +	devm_mutex_init(&srv->device, &data->set_speed_mutex);
> > > >  	ret = devm_request_threaded_irq(&srv->device, srv->irq, NULL,
> > > >  					pcie_bwnotif_irq_thread,
> > > >  					IRQF_SHARED | IRQF_ONESHOT,
> > > 
> > > We generally try to avoid devm_*() functions in port service drivers
> > > because if we later on move them into the PCI core (which is the
> > > plan), we'll have to unroll them. Not the end of the world that
> > > they're used here, just not ideal.
> > 
> > I think Jonathan disagrees with you on that:
> > 
> > https://lore.kernel.org/linux-pci/20241017114812.00005e67@xxxxxxxxxx/
> 
> Indeed - you beat me to it ;)
> 
> There is no practical way to move most of the port driver code into the
> PCI core, and definitely not the interrupts. It is a shame though, as
> I'd much prefer it if we could do so. At LPC, other issues such as power
> management were called out as being very hard to handle, but to me the
> interrupt bit is a single, relatively easy to understand blocker.
> 
> I've been very slow on getting back to this, but my current plan would
> be:
> 
> 1) Split up the bits of the portdrv subdrivers that are actually core
>    code (things like the AER counters etc). The current mix in the same
>    files makes it hard to reason about lifetimes etc.
> 
> 2) Squash all the portdrv sub drivers into simple library-style calls,
>    so no pretend devices, everything registered directly. That cleans
>    up a lot of the layering and still provides reusable code if it
>    makes sense to have multiple drivers for ports or to reuse this
>    code for something else. Note that along the way I'll check we can
>    build the portdrv as a module - I don't plan to actually do that,
>    but if it is possible it shows the layering / abstractions all
>    work. That will probably make use of devm_ where appropriate as it
>    simplifies a lot of paths.

I'm sorry to be a bit of a spoilsport here, but quirks make calls to the
ASPM code and now also to the bwctrl set Link Speed API, so I'm not
entirely sure you can actually succeed in that module test.
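
To make that concrete, here is a rough, hypothetical sketch (made-up
device IDs, quirk name and erratum; not an actual in-tree quirk) of the
kind of built-in caller I mean. quirks.c is always built into the
kernel, so it cannot link against pcie_set_target_speed() if the code
providing that symbol ends up in a module:

#include <linux/pci.h>
#include "pci.h"	/* assuming pcie_set_target_speed() is declared here */

/* Hypothetical fixup: cap the upstream link to dodge a made-up erratum. */
static void quirk_cap_link_speed(struct pci_dev *pdev)
{
	struct pci_dev *bridge = pci_upstream_bridge(pdev);

	if (!bridge)
		return;

	/* Request 2.5GT/s on the Downstream Port above the device. */
	pcie_set_target_speed(bridge, PCIE_SPEED_2_5GT, false);
}
DECLARE_PCI_FIXUP_ENABLE(0xabcd, 0x1234, quirk_cap_link_speed);

The same goes for the ASPM entry points the quirks already use
(pci_disable_link_state() and friends), so at least those bits would
have to stay built-in or grow stubs for a modular portdrv to even link.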
(It's just something that again indicates both would belong in the PCI
core, but sadly that direction is not an option.)

-- 
 i.

> 3) Add the MSI-X stuff to support 'new' child drivers, but only where
>    dynamic MSI-X is supported.
> 
> 4) Register new child devices where the layering does make sense. So
>    where we are actually binding drivers that can be unbound etc.
>    Only for cases where dynamic MSI-X is available.
> 
> I hope to get back to this in a week or so.
> 
> Jonathan
> 
> > > :-)