On Tuesday, November 26, 2013 09:23:15 PM Rafael J. Wysocki wrote:
> On Tuesday, November 26, 2013 07:56:19 AM Viresh Kumar wrote:
> > On 26 November 2013 04:59, Rafael J. Wysocki <rjw@xxxxxxxxxxxxx> wrote:
> > >> @@ -1259,6 +1262,8 @@ int dpm_suspend(pm_message_t state)
> > >>
> > >>  	might_sleep();
> > >>
> > >> +	cpufreq_suspend();
> > >> +
> > >>  	mutex_lock(&dpm_list_mtx);
> > >>  	pm_transition = state;
> > >>  	async_error = 0;
> > >
> > > Shouldn't it do cpufreq_resume() on errors?
> >
> > Yes, and I believe this is already done. In case dpm_suspend() fails,
> > dpm_resume() gets called. Isn't it?
>
> OK
>
> > >> diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
> > >> +void cpufreq_suspend(void)
> > >> +{
> > >> +	struct cpufreq_policy *policy;
> > >> +
> > >> +	if (!has_target())
> > >> +		return;
> > >> +
> > >> +	pr_debug("%s: Suspending Governors\n", __func__);
> > >> +
> > >> +	list_for_each_entry(policy, &cpufreq_policy_list, policy_list)
> > >> +		if (__cpufreq_governor(policy, CPUFREQ_GOV_STOP))
> > >> +			pr_err("%s: Failed to stop governor for policy: %p\n",
> > >> +				__func__, policy);
> > >
> > > This appears to be racy. Is it really racy, or just seemingly?
> >
> > Why does it look racy to you? Userspace should be frozen by now, so
> > policy_list should be stable and nobody else would touch it.
>
> You're stopping governors while they may be in use in principle. Do we
> have suitable synchronization in place for that?

Anyway, if you had done what I asked you to do and put the cpufreq
suspend/resume into dpm_suspend/resume_noirq(), I would probably have
taken this for 3.13. However, since you've decided to put those things
somewhere else, thus making the change much more intrusive, I can only
queue it up for 3.14.

This means I'm going to take Tianyu's patch as a stop gap for 3.13.

Thanks!

--
I speak only for myself.
Rafael J. Wysocki, Intel Open Source Technology Center.
-- To unsubscribe from this list: send the line "unsubscribe linux-tegra" in the body of a message to majordomo@xxxxxxxxxxxxxxx More majordomo info at http://vger.kernel.org/majordomo-info.html