Hi,

On Mon, May 4, 2020 at 10:50 AM Douglas Anderson <dianders@xxxxxxxxxxxx> wrote:
>
> Our switch statement doesn't have entries for CPU_CLUSTER_PM_ENTER,
> CPU_CLUSTER_PM_ENTER_FAILED, and CPU_CLUSTER_PM_EXIT and doesn't have
> a default. This means that we'll try to do a flush in those cases but
> we won't necessarily be the last CPU down. That's not so ideal since
> our (lack of) locking assumes we're on the last CPU.
>
> Luckily this isn't as big a problem as you'd think since (at least on
> the SoC I tested) we don't get these notifications except on full
> system suspend. ...and on full system suspend we get them on the last
> CPU down. That means that the worst problem we hit is flushing twice.
> Still, it's good to make it correct.
>
> Fixes: 985427f997b6 ("soc: qcom: rpmh: Invoke rpmh_flush() for dirty caches")
> Reported-by: Stephen Boyd <swboyd@xxxxxxxxxxxx>
> Signed-off-by: Douglas Anderson <dianders@xxxxxxxxxxxx>
> ---
>
> Changes in v6:
> - Release the lock on cluster notifications.
>
> Changes in v5:
> - Corrently => Correctly
>
> Changes in v4:
> - ("...Corrently ignore CPU_CLUSTER_PM notifications") split out for v4.
>
> Changes in v3: None
> Changes in v2: None
>
>  drivers/soc/qcom/rpmh-rsc.c | 3 +++
>  1 file changed, 3 insertions(+)

The bugfixes in this series seem somewhat important to land. Is there
something delaying them? Are we waiting for some tags from Maulik?

-Doug
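
P.S. For anyone reading this out of context: the point of the patch is just
that the cpu_pm notifier in rpmh-rsc should bail out (and drop its lock) on
the cluster-level notifications instead of falling through to the flush
path. I haven't pasted the diff here, but the shape of the callback is
roughly the below -- names like pm_lock and the empty case bodies are made
up for illustration, not the real rpmh-rsc internals:

    #include <linux/cpu_pm.h>
    #include <linux/notifier.h>
    #include <linux/spinlock.h>

    /* Illustrative only: the real driver keeps this state per DRV. */
    static DEFINE_SPINLOCK(pm_lock);

    static int example_cpu_pm_callback(struct notifier_block *nfb,
                                       unsigned long action, void *v)
    {
            int ret = NOTIFY_OK;

            spin_lock(&pm_lock);

            switch (action) {
            case CPU_PM_ENTER:
                    /* Track this CPU; only flush if it's the last one down. */
                    break;
            case CPU_PM_ENTER_FAILED:
            case CPU_PM_EXIT:
                    /* This CPU is back up; undo whatever ENTER recorded. */
                    break;
            default:
                    /*
                     * CPU_CLUSTER_PM_ENTER / _ENTER_FAILED / _EXIT (and
                     * anything else we don't know about): we may not be the
                     * last CPU, so don't flush.  Just drop the lock below and
                     * tell cpu_pm we didn't handle this event.
                     */
                    ret = NOTIFY_DONE;
                    break;
            }

            spin_unlock(&pm_lock);
            return ret;
    }

Per the quoted diffstat the actual change is only 3 added lines, presumably
just the new default handling; everything else above is context.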