On Wed, 18 Oct 2023 at 10:06, Stephan Gerhold
<stephan.gerhold@xxxxxxxxxxxxxxx> wrote:
>
> The genpd core caches performance state votes from devices that are
> runtime suspended as of commit 3c5a272202c2 ("PM: domains: Improve
> runtime PM performance state handling"). They get applied once the
> device becomes active again.
>
> To attach the power domains needed by qcom-cpufreq-nvmem, the OPP core
> calls genpd_dev_pm_attach_by_id(). This results in "virtual" dummy
> devices that use runtime PM only to control the enable and performance
> state of the attached power domain.
>
> However, at the moment nothing ever resumes the virtual devices created
> for qcom-cpufreq-nvmem. They remain permanently runtime suspended. This
> means that performance state votes made during cpufreq scaling always
> get cached and are never applied to the hardware.
>
> Fix this by enabling the devices after attaching them, and use
> dev_pm_syscore_device() to ensure the power domains also stay on when
> going to suspend. Since it supplies the CPU, we can never turn it off
> from Linux. There are other mechanisms to turn it off when needed,
> usually in the RPM firmware (RPMPD) or the cpuidle path (CPR genpd).

I believe we discussed using dev_pm_syscore_device() for the previous
version. It's not intended to be used for things like the above.

Moreover, I was under the impression that it wasn't really needed. In
fact, I would think that this actually breaks things for system
suspend/resume: because of it, the cpr driver's genpd ->power_on|off()
callbacks are no longer getting called, which means that the cpr state
machine isn't going to be restored properly. Or did I get this wrong?
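To illustrate my concern, here is a paraphrased sketch of the relevant
PM core code in drivers/base/power/main.c (from memory, not the literal
source): all dev_pm_syscore_device() really does is set a flag, and the
system-wide suspend path then skips flagged devices entirely, before
any subsystem callbacks get a chance to run.

/*
 * Paraphrased sketch, not the literal kernel code: marking a device
 * as "syscore" just sets dev->power.syscore.
 */
void dev_pm_syscore_device(struct device *dev, bool val)
{
#ifdef CONFIG_PM_SLEEP
	dev->power.syscore = val;
#endif
}

/*
 * The system suspend path bails out early for such devices, so no
 * suspend callbacks run for them at all - which also means genpd
 * never gets the chance to power off the domain and invoke the
 * provider's ->power_off() callback.
 */
static int __device_suspend(struct device *dev, pm_message_t state, bool async)
{
	if (dev->power.syscore)
		goto Complete;

	/* ... normal suspend handling in the real code ... */

Complete:
	complete_all(&dev->power.completion);
	return 0;
}

The resume path has the same early check, so I'd expect the cpr state
machine restore to be skipped in the same way.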
Kind regards
Uffe

>
> Without this fix, performance state votes are silently ignored, and the
> CPU/CPR voltage is never adjusted. This has been broken since 5.14 but
> for some reason no one noticed this on QCS404 so far.
>
> Cc: stable@xxxxxxxxxxxxxxx
> Fixes: 1cb8339ca225 ("cpufreq: qcom: Add support for qcs404 on nvmem driver")
> Signed-off-by: Stephan Gerhold <stephan.gerhold@xxxxxxxxxxxxxxx>
> ---
>  drivers/cpufreq/qcom-cpufreq-nvmem.c | 49 +++++++++++++++++++++++++++++++++---
>  1 file changed, 46 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/cpufreq/qcom-cpufreq-nvmem.c b/drivers/cpufreq/qcom-cpufreq-nvmem.c
> index 82a244f3fa52..3794390089b0 100644
> --- a/drivers/cpufreq/qcom-cpufreq-nvmem.c
> +++ b/drivers/cpufreq/qcom-cpufreq-nvmem.c
> @@ -25,6 +25,7 @@
>  #include <linux/platform_device.h>
>  #include <linux/pm_domain.h>
>  #include <linux/pm_opp.h>
> +#include <linux/pm_runtime.h>
>  #include <linux/slab.h>
>  #include <linux/soc/qcom/smem.h>
>
> @@ -47,6 +48,7 @@ struct qcom_cpufreq_match_data {
>
>  struct qcom_cpufreq_drv_cpu {
>  	int opp_token;
> +	struct device **virt_devs;
>  };
>
>  struct qcom_cpufreq_drv {
> @@ -268,6 +270,18 @@ static const struct qcom_cpufreq_match_data match_data_ipq8074 = {
>  	.get_version = qcom_cpufreq_ipq8074_name_version,
>  };
>
> +static void qcom_cpufreq_put_virt_devs(struct qcom_cpufreq_drv *drv, unsigned cpu)
> +{
> +	const char * const *name = drv->data->genpd_names;
> +	int i;
> +
> +	if (!drv->cpus[cpu].virt_devs)
> +		return;
> +
> +	for (i = 0; *name; i++, name++)
> +		pm_runtime_put(drv->cpus[cpu].virt_devs[i]);
> +}
> +
>  static int qcom_cpufreq_probe(struct platform_device *pdev)
>  {
>  	struct qcom_cpufreq_drv *drv;
> @@ -321,6 +335,7 @@ static int qcom_cpufreq_probe(struct platform_device *pdev)
>  	of_node_put(np);
>
>  	for_each_possible_cpu(cpu) {
> +		struct device **virt_devs = NULL;
>  		struct dev_pm_opp_config config = {
>  			.supported_hw = NULL,
>  		};
> @@ -341,7 +356,7 @@ static int qcom_cpufreq_probe(struct platform_device *pdev)
>
>  		if (drv->data->genpd_names) {
>  			config.genpd_names = drv->data->genpd_names;
> -			config.virt_devs = NULL;
> +			config.virt_devs = &virt_devs;
>  		}
>
>  		if (config.supported_hw || config.genpd_names) {
> @@ -352,6 +367,30 @@ static int qcom_cpufreq_probe(struct platform_device *pdev)
>  				goto free_opp;
>  			}
>  		}
> +
> +		if (virt_devs) {
> +			const char * const *name = config.genpd_names;
> +			int i, j;
> +
> +			for (i = 0; *name; i++, name++) {
> +				ret = pm_runtime_resume_and_get(virt_devs[i]);
> +				if (ret) {
> +					dev_err(cpu_dev, "failed to resume %s: %d\n",
> +						*name, ret);
> +
> +					/* Rollback previous PM runtime calls */
> +					name = config.genpd_names;
> +					for (j = 0; *name && j < i; j++, name++)
> +						pm_runtime_put(virt_devs[j]);
> +
> +					goto free_opp;
> +				}
> +
> +				/* Keep CPU power domain always-on */
> +				dev_pm_syscore_device(virt_devs[i], true);
> +			}
> +			drv->cpus[cpu].virt_devs = virt_devs;
> +		}
>  	}
>
>  	cpufreq_dt_pdev = platform_device_register_simple("cpufreq-dt", -1,
> @@ -365,8 +404,10 @@ static int qcom_cpufreq_probe(struct platform_device *pdev)
>  		dev_err(cpu_dev, "Failed to register platform device\n");
>
>  free_opp:
> -	for_each_possible_cpu(cpu)
> +	for_each_possible_cpu(cpu) {
> +		qcom_cpufreq_put_virt_devs(drv, cpu);
>  		dev_pm_opp_clear_config(drv->cpus[cpu].opp_token);
> +	}
>  	return ret;
>  }
>
> @@ -377,8 +418,10 @@ static void qcom_cpufreq_remove(struct platform_device *pdev)
>
>  	platform_device_unregister(cpufreq_dt_pdev);
>
> -	for_each_possible_cpu(cpu)
> +	for_each_possible_cpu(cpu) {
> +		qcom_cpufreq_put_virt_devs(drv, cpu);
>  		dev_pm_opp_clear_config(drv->cpus[cpu].opp_token);
> +	}
>  }
>
>  static struct platform_driver qcom_cpufreq_driver = {
>
> --
> 2.39.2
>