On Wed 05 Feb 06:06 PST 2020, Sudeep Holla wrote:
> On Wed, Feb 05, 2020 at 05:53:00PM +0530, Maulik Shah wrote:
> >
> > On 2/4/2020 8:51 PM, Sudeep Holla wrote:
> > > On Tue, Feb 04, 2020 at 10:22:42AM +0530, Maulik Shah wrote:
> > > > On 2/3/2020 10:38 PM, Sudeep Holla wrote:
> > > > > On Mon, Feb 03, 2020 at 07:05:38PM +0530, Maulik Shah wrote:
> > > > > > From: Ulf Hansson <ulf.hansson@xxxxxxxxxx>
> > > > > >
> > > > > > If the hierarchical CPU topology is used, but the OS initiated mode isn't
> > > > > > supported, we need to rely solely on the regular cpuidle framework to
> > > > > > manage the idle state selection, rather than using genpd and its
> > > > > > governor.
> > > > > >
> > > > > > For this reason, introduce a new PSCI DT helper function,
> > > > > > psci_dt_pm_domains_parse_states(), which parses and converts the
> > > > > > hierarchically described domain idle states from DT into regular flattened
> > > > > > cpuidle states. The converted states are added to the existing cpuidle
> > > > > > driver's array of idle states, which makes them available for cpuidle.
> > > > > >
> > > > > And what's the main motivation for this if OSI is not supported in the
> > > > > firmware?
> > > >
> > > > Hi Sudeep,
> > > >
> > > > The main motivation is to do last-man activities before the CPU cluster can
> > > > enter a deep idle state.
> > > >
> > > Details on those last-man activities will help the discussion. Basically
> > > I am wondering what they are and why they need to be done in OSPM?
> >
> > Hi Sudeep,
> >
> > There are cases like:
> >
> > The last CPU going into the deepest idle mode needs to lower various resource
> > requirements (e.g. DDR frequency).
> >
>
> In PC mode, only the PSCI implementation knows the last man and there shouldn't
> be any notion of it in the OS. If you need it, you may need OSI. You are still
> mixing up the things. NACK for any such approach, sorry.
>

Forgive me if I'm misunderstanding PSCI's role here, but doesn't it deal
with the power management of the "processor subsystem" in the SoC?

In the Qualcomm platforms most resources (voltage rails, clocks, etc.)
are controlled through a power controller that provides controls for a
state when the CPU subsystem is running and one for when it's asleep.

This allows non-CPU-related devices to control whether resources that
are shared with the CPU subsystem should be kept on when the last
CPU/cluster goes down.

An example of this would be the display controller voting to keep a
voltage rail on after the CPU subsystem collapses, because the display
is still on.

But as long as the CPU subsystem is running it will keep these
resources available and there's no need to change these votes (e.g. if
the display is turned on and off while the CPU is active, the sleep
requests cancel out), so they are simply cached/batched up in the RPMh
driver, and what Maulik's series is attempting to do is to flush the
cached values when Linux believes that the firmware might decide to
enter a lower power state.

Regards,
Bjorn
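
To make the batching/flushing idea described above concrete, here is a
minimal, hypothetical sketch (not the code from Maulik's series) of a
driver flushing its cached sleep votes from a CPU PM notifier when
Linux believes the cluster may power down. Only cpu_pm_register_notifier()
and CPU_CLUSTER_PM_ENTER are real kernel interfaces here;
my_flush_cached_sleep_votes() and the other my_* names are stand-ins
for whatever the platform driver uses to push its batched votes to the
power controller.

#include <linux/cpu_pm.h>
#include <linux/module.h>
#include <linux/notifier.h>

/*
 * Hypothetical stand-in for the platform driver's "write the batched
 * sleep/wake votes out to the power controller" operation.
 */
static int my_flush_cached_sleep_votes(void)
{
	return 0;
}

static int my_cpu_pm_notify(struct notifier_block *nb, unsigned long action,
			    void *unused)
{
	switch (action) {
	case CPU_CLUSTER_PM_ENTER:
		/*
		 * Linux thinks the cluster (and with it the CPU subsystem)
		 * may be powered down by the firmware: flush the cached
		 * votes so the power controller applies the "sleep" values
		 * if the low power state is actually entered.
		 */
		if (my_flush_cached_sleep_votes())
			return NOTIFY_BAD;
		break;
	default:
		break;
	}

	return NOTIFY_OK;
}

static struct notifier_block my_cpu_pm_nb = {
	.notifier_call = my_cpu_pm_notify,
};

static int __init my_flush_example_init(void)
{
	return cpu_pm_register_notifier(&my_cpu_pm_nb);
}
module_init(my_flush_example_init);

MODULE_LICENSE("GPL");

Note that in this sketch the OS only reacts to the possibility of a
power-down; the firmware still makes the final decision about whether
the low power state is entered, which matches the PC-mode behaviour
being discussed in the thread.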