Re: [RFC PATCH 7/9] clk: apple: Add clk-apple-cluster driver to manage CPU p-states

[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

 



On 15/10/2021 07.07, Stephen Boyd wrote:
This looks bad from a locking perspective. How is lockdep holding up
with this driver? We're underneath the prepare lock here and we're
setting a couple level registers which is all good but now we're calling
into genpd code and who knows what's going to happen locking wise.

It seems this is all going away given the other discussion threads point towards handling this directly via OPP in the cpufreq-dt driver. I'll run whatever I end up with for v2 through lockdep though, good call!

I don't actually see anything in here that indicates this is supposed to
be a clk provider. Is it being modeled as a clk so that it can use
cpufreq-dt? If it was a clk provider I'd expect it to be looking at
parent clk rates, and reading hardware to calculate frequencies based on
dividers and multipliers, etc. None of that is happening here.

Why not write a cpufreq driver, similar to qcom-cpufreq-hw.c that looks
through the OPP table and then writes the value into the pstate
registers? The registers in here look awfully similar to the qcom
hardware. I don't know what the DESIRED1 and DESIRED2 registers are for
though. Maybe they're so that one or the other frequency can be used if
available? Like a min/max?

Either way, writing this as a cpufreq driver avoids the clk framework
entirely which is super great for me :) It also avoids locking headaches
from the clk prepare lock, and it also lets you support lockless cpufreq
transitions by implementing the fast_switch function. I don't see any
downsides to the cpufreq driver approach.

I wasn't too sure about this approach; I thought using a clk provider would end up simplifying things since I could use the cpufreq-dt machinery to take care of all the OPP stuff, and a lot of SoCs seemed to be going that way, but it seems cpufreq might be a better approach for this SoC?

There can only be one cpufreq driver instance, while I used two clock controllers to model the two clusters. So in the cpufreq case, the driver itself would have to deal with all potential CPU cluster instances/combinations itself. Not sure how much more code that will be, hopefully not too much...

I see qcom-cpufreq-hw uses a qcom,freq-domain prop to link CPUs to the cpufreq domains. cpufreq-dt and vexpress-spc-cpufreq instead use dev_pm_opp_get_sharing_cpus to look for shared OPP tables. Is there a reason not to do it that way and avoid the vendor prop? I guess the prop is more explicit while the sharing approach would have an implicit order dependency (i.e. CPUs are always grouped by cluster and clusters are listed in /cpus in the same order as in the cpufreq node)...

(Ack on the other comments, but if this becomes a cpufreq driver most of it is going to end up rewritten... :))

For the cpufreq case, do you have any suggestions as to how to relate it to the memory controller configuration tweaks? Ideally this would go through the OPP tables so it can be customized for future SoCs without stuff hardcoded in the driver... it seems the configuration affects power saving behavior / latencies, so it doesn't quite match the interconnect framework bandwidth request stuff. I'm also not sure how this would affect fast_switch, since going through those frameworks might imply locks... we might even find ourselves with a situation in the near future where multiple cpufreq policies can request memory controller latency reduction independently; I can come up with how to do this locklessly using atomics, but I can't imagine that being workable with higher-level frameworks, it would have to be a vendor-specific mechanism at that point...

--
Hector Martin (marcan@xxxxxxxxx)
Public Key: https://mrcn.st/pub



[Index of Archives]     [Device Tree Compilter]     [Device Tree Spec]     [Linux Driver Backports]     [Video for Linux]     [Linux USB Devel]     [Linux PCI Devel]     [Linux Audio Users]     [Linux Kernel]     [Linux SCSI]     [XFree86]     [Yosemite Backpacking]


  Powered by Linux