On 3/6/2020 3:50 AM, Doug Anderson wrote:
> Hi,
>
> On Thu, Mar 5, 2020 at 9:07 AM Maulik Shah <mkshah@xxxxxxxxxxxxxx> wrote:
>> TCSes have previously programmed data when rpmh_flush is called.
>> This can cause old data to trigger along with newly flushed.
>>
>> Fix this by cleaning SLEEP and WAKE TCSes before new data is flushed.
>>
>> Fixes: 600513dfeef3 ("drivers: qcom: rpmh: cache sleep/wake state requests")
>> Signed-off-by: Maulik Shah <mkshah@xxxxxxxxxxxxxx>
>> ---
>>  drivers/soc/qcom/rpmh.c | 5 +++++
>>  1 file changed, 5 insertions(+)
>>
>> diff --git a/drivers/soc/qcom/rpmh.c b/drivers/soc/qcom/rpmh.c
>> index 1951f6a..63364ce 100644
>> --- a/drivers/soc/qcom/rpmh.c
>> +++ b/drivers/soc/qcom/rpmh.c
>> @@ -472,6 +472,11 @@ int rpmh_flush(struct rpmh_ctrlr *ctrlr)
>>                  return 0;
>>          }
>>
>> +        /* Invalidate the TCSes first to avoid stale data */
>> +        do {
>> +                ret = rpmh_rsc_invalidate(ctrlr_to_drv(ctrlr));
>> +        } while (ret == -EAGAIN);
>> +
>>          /* First flush the cached batch requests */
>>          ret = flush_batch(ctrlr);
>>          if (ret)
> I think you should make this patch 3/4 instead of 4/4, and then:
>
> 1. In this patch remove the call to rpmh_rsc_invalidate() in
> rpmh_invalidate(). You've already marked things "dirty" in
> invalidate_batch() so no need to actually program the hardware--it'll
> happen in the flush.

Done.

> 2. In patch 4/4 (the flushing patch) add a call to rpmh_flush() to
> rpmh_invalidate() if you're in non-OSI mode. Presumably you'll need a
> spinlock around the rpmh_flush() call?

With (1) addressed, and with rpmh_start_transaction() and
rpmh_end_transaction() introduced in v13, this is not required.

Thanks,
Maulik

>
> The end result of that will be that rpmh_invalidate() will properly
> leave the non-batch sleep/wake sets programmed.
>
> -Doug

-- 
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member
of Code Aurora Forum, hosted by The Linux Foundation
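
For context, a minimal sketch of what the rpmh_invalidate() path discussed
in point (1) could reduce to once the rpmh_rsc_invalidate() loop moves into
rpmh_flush(): invalidation only drops the cached batch requests and marks
the controller dirty, leaving the TCS cleanup to the next flush. This is an
illustration against the driver as quoted above, not the actual patch;
get_rpmh_ctrlr() and invalidate_batch() are the existing helpers in
drivers/soc/qcom/rpmh.c, and the exact code in the merged series may differ.

/*
 * Illustrative sketch only: with the hardware invalidate done in
 * rpmh_flush(), invalidating the caches no longer needs to touch
 * the TCSes directly.
 */
int rpmh_invalidate(const struct device *dev)
{
        struct rpmh_ctrlr *ctrlr = get_rpmh_ctrlr(dev);

        /* Drop cached batch requests; this also sets ctrlr->dirty */
        invalidate_batch(ctrlr);

        return 0;
}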