Re: [PATCH V3 2/2] cpufreq: scmi: Register for limit change notifications

On Thu, Feb 29, 2024 at 10:22:39AM +0000, Lukasz Luba wrote:
> 
> 
> On 2/29/24 09:59, Lukasz Luba wrote:
> > 
> > 
> > On 2/28/24 17:00, Sibi Sankar wrote:
> > > 
> > > 
> > > On 2/28/24 18:54, Lukasz Luba wrote:
> > > > 
> > > > 
> > > > On 2/27/24 18:16, Sibi Sankar wrote:
> > > > > Register for limit change notifications if supported and use the
> > > > > throttled frequency from the notification to apply HW pressure.
> > > 
> > > Lukasz,
> > > 
> > > Thanks for taking time to review the series!
> > > 
> > > > > 
> > > > > Signed-off-by: Sibi Sankar <quic_sibis@xxxxxxxxxxx>
> > > > > ---
> > > > > 
> > > > > v3:
> > > > > * Sanitize range_max received from the notifier. [Pierre]
> > > > > * Update commit message.
> > > > > 
> > > > >   drivers/cpufreq/scmi-cpufreq.c | 29 ++++++++++++++++++++++++++++-
> > > > >   1 file changed, 28 insertions(+), 1 deletion(-)
> > > > > 
> > > > > diff --git a/drivers/cpufreq/scmi-cpufreq.c b/drivers/cpufreq/scmi-cpufreq.c
> > > > > index 76a0ddbd9d24..78b87b72962d 100644
> > > > > --- a/drivers/cpufreq/scmi-cpufreq.c
> > > > > +++ b/drivers/cpufreq/scmi-cpufreq.c
> > > > > @@ -25,9 +25,13 @@ struct scmi_data {
> > > > >       int domain_id;
> > > > >       int nr_opp;
> > > > >       struct device *cpu_dev;
> > > > > +    struct cpufreq_policy *policy;
> > > > >       cpumask_var_t opp_shared_cpus;
> > > > > +    struct notifier_block limit_notify_nb;
> > > > >   };
> > > > > +const struct scmi_handle *handle;
> > 
> > I've missed this bit here.
> 
> So for this change we actually have to ask Cristian or Sudeep
> because I'm not sure if we have only one 'handle' instance
> for all cpufreq devices.
> 
> If we have different 'handle' we cannot move it to the
> global single pointer.
> 
> Sudeep, Cristian what do you think?

I was just noticing this myself while replying :D .... since SCMI drivers can
be probed multiple times IF you define multiple SCMI top nodes in your DT
containing the same protocol nodes, each probe receives a distinct
sdev/handle/ph ... so any attempt to globalize these won't work ... BUT ...

...this is a bit of a weird setup, BUT it is not against the spec, and it can
be used to further parallelize SCMI accesses to disjoint sets of resources
within the same protocol (a long story here...), AND this type of setup is
already used by some other colleagues of Sibi working on a different line of
products (AFAIK)...

So, for these reasons, all the other SCMI drivers usually keep per-instance,
non-global references to handle/sdev/ph...
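
Just to illustrate what I mean by per-instance references, a rough and
untested sketch (NOT meant as the actual fix; the exact field placement is
purely illustrative):

/*
 * Untested sketch: keep the SCMI references per-instance inside the
 * per-policy data instead of at file scope, so that multiple probed
 * instances cannot clobber each other.
 */
struct scmi_data {
	int domain_id;
	int nr_opp;
	struct device *cpu_dev;
	struct cpufreq_policy *policy;
	cpumask_var_t opp_shared_cpus;
	struct notifier_block limit_notify_nb;
	/* per-instance references, replacing the file-scope globals */
	const struct scmi_handle *handle;
	struct scmi_protocol_handle *ph;
	const struct scmi_perf_proto_ops *perf_ops;
};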

...having said that, though, looking at the structure of CPUFreq drivers, I
am not sure they can support such a setup, where multiple instances of the
same driver are probed...

.... indeed, the existing *ph reference above is already global... so this
would already not work in the case of multiple instances...

...and if I look at the signature CPUFreq expects for scmi_cpufreq_get_rate()
and at how it is implemented now using the global *ph reference, it is
clearly already not working cleanly in a multi-instance setup...
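
Concretely, a fixed-up ->get() would then have to recover everything through
the per-policy driver data; again only an untested sketch, assuming the
per-instance fields from the struct above:

/*
 * Untested sketch: ->get() recovers the per-instance references via
 * policy->driver_data instead of relying on the file-scope globals.
 */
static unsigned int scmi_cpufreq_get_rate(unsigned int cpu)
{
	struct cpufreq_policy *policy = cpufreq_cpu_get_raw(cpu);
	struct scmi_data *priv;
	unsigned long rate;
	int ret;

	if (!policy)
		return 0;

	priv = policy->driver_data;

	/* use the per-instance ph/perf_ops instead of the globals */
	ret = priv->perf_ops->freq_get(priv->ph, priv->domain_id, &rate, false);
	if (ret)
		return 0;

	return rate / 1000;
}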

...now... I can imagine how to (maybe) fix the above by removing the globals,
BUT, more generally, the question is: is CPUFreq supposed to work at all in
this multi-probed mode of operation?
Does it even make sense to support this in CPUFreq at all?

(as an example, in cpufreq.c there is a static global cpufreq_driver pointing
 to the arch-specific configured driver, BUT that also holds some .driver_data,
 AND that clearly won't be instance-specific if you probe multiple times and
 register with CPUFreq multiple times...)
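
For completeness, the core itself refuses a second registration anyway; a
condensed paraphrase of the relevant cpufreq.c logic (from memory, NOT
verbatim):

/*
 * Condensed paraphrase of cpufreq.c (not verbatim): the core keeps a
 * single static driver pointer, so a second registration attempt from
 * a multi-probed driver would simply fail with -EEXIST.
 */
static struct cpufreq_driver *cpufreq_driver;

int cpufreq_register_driver(struct cpufreq_driver *driver_data)
{
	if (cpufreq_driver)
		return -EEXIST;		/* only one driver at a time */

	cpufreq_driver = driver_data;
	/* ... rest of the registration elided ... */
	return 0;
}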

 More questions than answers here :D

 Thanks,
 Cristian



