We currently only enforce BW floors for a subset of nodes in a path. All
BCMs that need updating are queued in the pre_aggregate/aggregate phase.
The first set() commits all queued BCMs, and subsequent set() calls
short-circuit without committing anything. Since the floor BW isn't
applied to sum_avg/max_peak until set(), some BCMs are committed before
their associated nodes reflect the floor.

Set the floor as each node is being aggregated. This ensures that all
relevant floors are set before the BCMs are committed.

Fixes: 266cd33b5913 ("interconnect: qcom: Ensure that the floor bandwidth value is enforced")
Signed-off-by: Mike Tipton <mdtipton@xxxxxxxxxxxxxx>
---
 drivers/interconnect/qcom/icc-rpmh.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/drivers/interconnect/qcom/icc-rpmh.c b/drivers/interconnect/qcom/icc-rpmh.c
index bf01d09dba6c..f118f57eae37 100644
--- a/drivers/interconnect/qcom/icc-rpmh.c
+++ b/drivers/interconnect/qcom/icc-rpmh.c
@@ -57,6 +57,11 @@ int qcom_icc_aggregate(struct icc_node *node, u32 tag, u32 avg_bw,
 			qn->sum_avg[i] += avg_bw;
 			qn->max_peak[i] = max_t(u32, qn->max_peak[i], peak_bw);
 		}
+
+		if (node->init_avg || node->init_peak) {
+			qn->sum_avg[i] = max_t(u64, qn->sum_avg[i], node->init_avg);
+			qn->max_peak[i] = max_t(u64, qn->max_peak[i], node->init_peak);
+		}
 	}
 
 	*agg_avg += avg_bw;
@@ -90,11 +95,6 @@ int qcom_icc_set(struct icc_node *src, struct icc_node *dst)
 	qp = to_qcom_provider(node->provider);
 	qn = node->data;
 
-	qn->sum_avg[QCOM_ICC_BUCKET_AMC] = max_t(u64, qn->sum_avg[QCOM_ICC_BUCKET_AMC],
-						 node->avg_bw);
-	qn->max_peak[QCOM_ICC_BUCKET_AMC] = max_t(u64, qn->max_peak[QCOM_ICC_BUCKET_AMC],
-						 node->peak_bw);
-
 	qcom_icc_bcm_voter_commit(qp->voter);
 
 	return 0;
-- 
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project
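
For context, a minimal, self-contained C sketch of the ordering problem the
commit message describes. This is not part of the patch; every name in it
(fake_node, set_old, aggregate_new, committed, voter_committed) is
hypothetical and only models the behavior stated above: the first set()
commits everything queued for the path, so a floor applied only inside set()
arrives too late for nodes whose set() runs after that commit, while a floor
applied during aggregation lands before any commit.

	/*
	 * Illustrative sketch only, not the driver code. Models two nodes A
	 * and B sharing one voter; the first "set" commits all queued state.
	 */
	#include <stdio.h>

	#define NUM_NODES 2

	struct fake_node {
		const char *name;
		unsigned long long sum_avg;	/* aggregated bandwidth */
		unsigned long long init_avg;	/* floor that must be enforced */
	};

	static unsigned long long committed[NUM_NODES];
	static int voter_committed;

	/* Old scheme: the floor is applied per node at set() time. */
	static void set_old(struct fake_node *nodes, int idx)
	{
		int i;

		if (nodes[idx].sum_avg < nodes[idx].init_avg)
			nodes[idx].sum_avg = nodes[idx].init_avg;

		if (voter_committed)
			return;			/* later set() calls short-circuit */

		for (i = 0; i < NUM_NODES; i++)	/* first set() commits everything */
			committed[i] = nodes[i].sum_avg;
		voter_committed = 1;
	}

	/* New scheme: the floor is applied while each node is aggregated. */
	static void aggregate_new(struct fake_node *n, unsigned long long avg_bw)
	{
		n->sum_avg += avg_bw;
		if (n->sum_avg < n->init_avg)
			n->sum_avg = n->init_avg;
	}

	int main(void)
	{
		struct fake_node nodes[NUM_NODES] = {
			{ "A", 0, 100 },	/* floor 100 */
			{ "B", 0, 200 },	/* floor 200 */
		};
		int i;

		/* Old order: set(A) commits before set(B) applies B's floor. */
		set_old(nodes, 0);
		set_old(nodes, 1);
		printf("old: committed B = %llu (floor 200 missed)\n", committed[1]);

		/* New order: floors land during aggregation, before any commit. */
		nodes[0].sum_avg = nodes[1].sum_avg = 0;
		voter_committed = 0;
		aggregate_new(&nodes[0], 0);
		aggregate_new(&nodes[1], 0);
		for (i = 0; i < NUM_NODES; i++)
			committed[i] = nodes[i].sum_avg;
		printf("new: committed B = %llu (floor applied)\n", committed[1]);

		return 0;
	}

Running the sketch prints a committed value of 0 for B under the old
ordering and 200 under the new one, which is the effect the patch achieves
by folding init_avg/init_peak into sum_avg/max_peak in qcom_icc_aggregate()
instead of in qcom_icc_set().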