On 26 September 2013 00:07, Rafael J. Wysocki <rjw@xxxxxxxxxxxxx> wrote:
> So the problem is real, but the fix seems to be of a "quick and dirty" kind.
>
> First of all, it looks like we need a clear "begin transition" call that
> I suppose drivers should execute from their .target() methods once they have
> decided to do a transition. That would increment the "ongoing" counter etc.
>
> Second, we need a corresponding "end transition" call that would be executed
> whenever appropriate from the driver's perspective.
>
> Clearly, these two things should be independent of the notifiers and the
> notifications should only be done between "begin transition" and "end
> transition" and only by whoever called the "begin transition" to start with.
>
> Now, question is what should happen if "begin transition" is called when
> the previous transition hasn't been completed yet, should it block or should
> it fail? There seem to be arguments for both, but I suppose blocking would be
> easier to implement.

I have another solution which is much simpler, and I can hardly believe
that it solves all the problems we were talking about. Please see if
there is any loophole in it.

diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
index ec391d7..e4ed89a 100644
--- a/drivers/cpufreq/cpufreq.c
+++ b/drivers/cpufreq/cpufreq.c
@@ -340,6 +340,13 @@ static void __cpufreq_notify_transition(struct cpufreq_policy *policy,
 	}
 }
 
+void cpufreq_notify_transition_each_cpu(struct cpufreq_policy *policy,
+		struct cpufreq_freqs *freqs, unsigned int state)
+{
+	for_each_cpu(freqs->cpu, policy->cpus)
+		__cpufreq_notify_transition(policy, freqs, state);
+}
+
 /**
  * cpufreq_notify_transition - call notifier chain and adjust_jiffies
  * on frequency transition.
@@ -351,8 +358,34 @@ static void __cpufreq_notify_transition(struct cpufreq_policy *policy,
 void cpufreq_notify_transition(struct cpufreq_policy *policy,
 		struct cpufreq_freqs *freqs, unsigned int state)
 {
-	for_each_cpu(freqs->cpu, policy->cpus)
-		__cpufreq_notify_transition(policy, freqs, state);
+	if ((state != CPUFREQ_PRECHANGE) && (state != CPUFREQ_POSTCHANGE))
+		return cpufreq_notify_transition_each_cpu(policy, freqs, state);
+
+	/* Serialize pre-post notifications */
+	mutex_lock(&policy->transition_lock);
+	if (unlikely(WARN_ON(!policy->transition_ongoing &&
+				(state == CPUFREQ_POSTCHANGE)))) {
+		mutex_unlock(&policy->transition_lock);
+		return;
+	}
+
+	if (state == CPUFREQ_PRECHANGE) {
+		while (policy->transition_ongoing) {
+			mutex_unlock(&policy->transition_lock);
+			cpu_relax();
+			mutex_lock(&policy->transition_lock);
+		}
+
+		policy->transition_ongoing = true;
+		mutex_unlock(&policy->transition_lock);
+	}
+
+	cpufreq_notify_transition_each_cpu(policy, freqs, state);
+
+	if (state == CPUFREQ_POSTCHANGE) {
+		policy->transition_ongoing = false;
+		mutex_unlock(&policy->transition_lock);
+	}
 }
 EXPORT_SYMBOL_GPL(cpufreq_notify_transition);
 
@@ -950,6 +983,8 @@ static struct cpufreq_policy *cpufreq_policy_alloc(void)
 		goto err_free_cpumask;
 
 	INIT_LIST_HEAD(&policy->policy_list);
+	mutex_init(&policy->transition_lock);
+
 	return policy;
 
 err_free_cpumask:
diff --git a/include/linux/cpufreq.h b/include/linux/cpufreq.h
index 0aba2a6c..bb76909 100644
--- a/include/linux/cpufreq.h
+++ b/include/linux/cpufreq.h
@@ -85,6 +85,8 @@ struct cpufreq_policy {
 	struct list_head	policy_list;
 	struct kobject		kobj;
 	struct completion	kobj_unregister;
+	bool			transition_ongoing; /* Tracks transition status */
+	struct mutex		transition_lock;
 };
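
For context, a minimal sketch (not part of the patch) of how a driver's
.target() path would issue these notifications. With the change above, the
PRECHANGE call is what waits until any in-flight transition on the same
policy has posted its POSTCHANGE. foo_target() and foo_write_frequency()
are made-up names for illustration; the latter stands in for the
hardware-specific frequency write and is not an existing API.

#include <linux/cpufreq.h>

/* Hypothetical hardware hook, assumed for illustration only. */
static int foo_write_frequency(struct cpufreq_policy *policy,
			       unsigned int freq);

static int foo_target(struct cpufreq_policy *policy,
		      unsigned int target_freq, unsigned int relation)
{
	struct cpufreq_freqs freqs = {};
	int ret;

	/* relation handling omitted; the sketch just jumps to target_freq */
	freqs.old = policy->cur;
	freqs.new = target_freq;

	/* Marks the transition as ongoing; spins while another one runs. */
	cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE);

	ret = foo_write_frequency(policy, target_freq);
	if (ret)
		freqs.new = freqs.old;	/* report the unchanged frequency */

	/* Clears transition_ongoing so the next transition may start. */
	cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE);

	return ret;
}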
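
And for comparison, a rough sketch of the explicit begin/end calls Rafael
describes above, in the blocking variant. Everything here is an assumption
for illustration: the function names, the transition_wait wait queue, and
the reuse of transition_ongoing/transition_lock are not existing cpufreq
code, and the sketch assumes cpufreq_notify_transition() stays the plain
per-CPU notifier loop (i.e. without the serialization added by the patch).

#include <linux/cpufreq.h>
#include <linux/mutex.h>
#include <linux/wait.h>

/*
 * Assumes struct cpufreq_policy additionally carries:
 *	wait_queue_head_t	transition_wait;
 * initialised in cpufreq_policy_alloc() next to transition_lock.
 */
void cpufreq_transition_begin(struct cpufreq_policy *policy,
			      struct cpufreq_freqs *freqs)
{
wait:
	/* Sleep until the previous transition on this policy has ended. */
	wait_event(policy->transition_wait, !policy->transition_ongoing);

	mutex_lock(&policy->transition_lock);
	if (policy->transition_ongoing) {
		/* Lost the race against another waiter; wait again. */
		mutex_unlock(&policy->transition_lock);
		goto wait;
	}
	policy->transition_ongoing = true;
	mutex_unlock(&policy->transition_lock);

	cpufreq_notify_transition(policy, freqs, CPUFREQ_PRECHANGE);
}

void cpufreq_transition_end(struct cpufreq_policy *policy,
			    struct cpufreq_freqs *freqs)
{
	cpufreq_notify_transition(policy, freqs, CPUFREQ_POSTCHANGE);

	policy->transition_ongoing = false;
	wake_up(&policy->transition_wait);
}

The main difference from the patch is that waiters sleep on a wait queue
instead of busy-looping with cpu_relax(), and the driver marks the
transition window explicitly rather than having the notifier call do it.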