On Tue, Mar 13, 2012 at 4:52 PM, Kevin Hilman <khilman@xxxxxx> wrote:
> Hi Colin,
>
> On 12/21/2011 01:09 AM, Colin Cross wrote:
>
>> To use coupled cpuidle states, a cpuidle driver must:
>
> [...]
>
>> Provide a struct cpuidle_state.enter function for each state
>> that affects multiple cpus.  This function is guaranteed to be
>> called on all cpus at approximately the same time.  The driver
>> should ensure that the cpus all abort together if any cpu tries
>> to abort once the function is called.
>
> I've discovered the last sentence above is crucial, and in order to catch
> all the corner cases I found it useful to have the struct
> cpuidle_coupled in cpuidle.h so that the driver can check ready_count
> itself (patch below, on top of $SUBJECT series.)

ready_count is internal state of the core coupled code, and will change
significantly in the next version of the patches.  Drivers cannot depend
on it.

> As you know, on OMAP4, when entering the coupled state, CPU0 has to wait
> for CPU1 to enter its low power state before entering itself.  The first
> pass at implementing this was to just spin waiting for the powerdomain
> of CPU1 to hit off.  That works... most of the time.
>
> If CPU1 wakes up immediately (or before CPU0 starts checking), or, more
> likely, fails to hit the low-power state because of other hardware
> "conditions", CPU0 will end up stuck in the loop waiting for CPU1.
>
> To solve this, in addition to checking the power state of CPU1, I also
> check if (coupled->ready_count != cpumask_weight(&coupled->alive_coupled_cpus)).
> If true, it means that CPU1 has already exited/aborted, so CPU0 had
> better abort as well.
>
> Checking the ready_count seemed like an easy way to do this, but did you
> have any other mechanisms in mind for CPUs to communicate that they've
> exited/aborted?

Why not set a flag from CPU1 when it exits the low power state, and have
CPU0 spin on the powerdomain register or the flag?
You can then use the parallel barrier function to ensure both cpus have
seen the flag and reset it to 0 before returning.