Re: [PATCH v3 5/7] drivers: firmware: psci: Add hierarchical domain idle states converter

On Wed, 5 Feb 2020 at 17:18, Sudeep Holla <sudeep.holla@xxxxxxx> wrote:
>
> On Wed, Feb 05, 2020 at 04:55:17PM +0100, Ulf Hansson wrote:
> > On Wed, 5 Feb 2020 at 15:06, Sudeep Holla <sudeep.holla@xxxxxxx> wrote:
> > >
> > > On Wed, Feb 05, 2020 at 05:53:00PM +0530, Maulik Shah wrote:
> > > >
> > > > On 2/4/2020 8:51 PM, Sudeep Holla wrote:
> > > > > On Tue, Feb 04, 2020 at 10:22:42AM +0530, Maulik Shah wrote:
> > > > > > On 2/3/2020 10:38 PM, Sudeep Holla wrote:
> > > > > > > On Mon, Feb 03, 2020 at 07:05:38PM +0530, Maulik Shah wrote:
> > > > > > > > From: Ulf Hansson <ulf.hansson@xxxxxxxxxx>
> > > > > > > >
> > > > > > > > If the hierarchical CPU topology is used, but the OS initiated mode isn't
> > > > > > > > supported, we need to rely solely on the regular cpuidle framework to
> > > > > > > > manage the idle state selection, rather than using genpd and its
> > > > > > > > governor.
> > > > > > > >
> > > > > > > > For this reason, introduce a new PSCI DT helper function,
> > > > > > > > psci_dt_pm_domains_parse_states(), which parses and converts the
> > > > > > > > hierarchically described domain idle states from DT into regular, flattened
> > > > > > > > cpuidle states. The converted states are added to the existing cpuidle
> > > > > > > > driver's array of idle states, which makes them available to cpuidle.
> > > > > > > >
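
For anyone following along: the flattening essentially boils down to appending each
converted domain state to the cpuidle driver's states[] array. A minimal sketch of
that step is below; it is illustrative only (the helper name is made up, this is not
the code from the patch):

#include <linux/cpuidle.h>
#include <linux/errno.h>

/*
 * Illustrative only: append one converted domain idle state to the cpuidle
 * driver's array, so the regular cpuidle governor can select it when OS
 * initiated mode is not available.
 */
static int psci_append_domain_state(struct cpuidle_driver *drv,
				    const struct cpuidle_state *state)
{
	if (drv->state_count >= CPUIDLE_STATE_MAX)
		return -ENOMEM;

	drv->states[drv->state_count++] = *state;

	return 0;
}
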
> > > > > > > And what's the main motivation for this if OSI is not supported in the
> > > > > > > firmware ?
> > > > > > Hi Sudeep,
> > > > > >
> > > > > > Main motivation is to do last-man activities before the CPU cluster can
> > > > > > enter a deep idle state.
> > > > > >
> > > > > Details on those last-man activities will help the discussion. Basically
> > > > > I am wondering what they are and why they need to be done in OSPM ?
> > > >
> > > > Hi Sudeep,
> > > >
> > > > there are cases like,
> > > >
> > > > The last CPU going to the deepest idle mode needs to lower various
> > > > resource requirements (e.g. DDR frequency).
> > > >
> > >
> > > In PC mode, only the PSCI implementation knows the last man and there shouldn't
> > > be any notion of it in the OS. If you need it, you may need OSI. You are still
> > > mixing things up. NACK for any such approach, sorry.
> >
> > Sudeep, I don't quite agree with your NACK to this. At least not yet. :-)
> >
>
> OK, I am not surprised :-)

Apologies for troubling you again. :-)

>
> > I do agree that the best suited solution seems to be OSI, as to
> > support this kind of SoC requirements.
> >
>
> That's the main point. We need to draw a line as to what we want to do
> with PC and OSI mode. If we plan to take up all the last-man responsibilities
> in the kernel, what's the point in not supporting OSI in the firmware
> then ? I can't buy it yet.
>
> > However, if for some reason the PC mode is being used, we could still
> > allow Linux to control "last-man activities" as it knows what each CPU
> > has voted for when going idle. Yes, the PSCI FW decides in the end,
> > but that doesn't really matter. Or is there another technical reason
> > why you object?
> >
>
> Precisely, the FW decides, so let it. Just because we can do it in the kernel
> doesn't mean we must do it. It's clear in the spec, and doing it in the
> kernel will be sub-optimal if the PSCI f/w aborts entering the deeper
> state that required some action in the first place.

Yes, it may be suboptimal for PC-mode.

On the other hand, we already fire CPU PM notifiers when entering/exiting
idle states (except for WFI). Those may also be suboptimal for similar
reasons.

Maybe it's not the best argument, but it sounds like allowing us to
control cluster power on/off notifications for last-man activities
would just conform to the behaviour we already have. No?
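
To make that concrete, a driver can already hook those notifications along
the lines of the sketch below (a rough example only; the foo_* names and the
DDR-vote detail are made up, not code from this series):

#include <linux/cpu_pm.h>
#include <linux/init.h>
#include <linux/notifier.h>

/* Example last-man style hook built on the existing CPU PM notifiers. */
static int foo_cpu_pm_notify(struct notifier_block *nb,
			     unsigned long action, void *data)
{
	switch (action) {
	case CPU_CLUSTER_PM_ENTER:
		/* e.g. drop the DDR/bus performance vote before the cluster powers down */
		break;
	case CPU_CLUSTER_PM_EXIT:
		/* restore the vote once the cluster is powered up again */
		break;
	}

	return NOTIFY_OK;
}

static struct notifier_block foo_cpu_pm_nb = {
	.notifier_call = foo_cpu_pm_notify,
};

static int __init foo_cpu_pm_init(void)
{
	return cpu_pm_register_notifier(&foo_cpu_pm_nb);
}
device_initcall(foo_cpu_pm_init);
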

>
> > As a matter of fact, if we allow support for PC mode with
> > "last-man-activities", it would allow us to make a fair
> > performance/energy comparison between the two PSCI CPU suspend modes,
> > for the same SoC. I would be thrilled to look into doing such tests;
> > I bet you would be as well!?
> >
>
> I was, but not anymore, especially if we need such changes in the kernel
> to do so.
>
> Just use OSI, as that was the point of adding all of this after years of
> discussion claiming it's more optimal compared to PC. Now saying that
> you need more changes to compare it with PC just doesn't make any sense
> at all to me.

Fair enough.

I was just pondering whether there are other reasons why we may want this.

One other thing that could be problematic to support is when other
resources, I/O controllers for example, share the same power rail as a
cluster. When such a controller is in use, idle states of the cluster
must be prevented. Without using genpd to model the CPU topology, it may
be difficult to deal with this.

Of course, using PC mode when trying to deal with this
platform/board requirement would also be suboptimal. In other words,
your argument about when to use OSI vs PC mode still stands.

Kind regards
Uffe


