3.16.78-rc1 review patch.  If anyone has any objections, please let me know.

------------------

From: Liangyan <liangyan.peng@xxxxxxxxxxxxxxxxx>

commit 5e2d2cc2588bd3307ce3937acbc2ed03c830a861 upstream.

do_sched_cfs_period_timer() will refill cfs_b runtime and call
distribute_cfs_runtime() to unthrottle cfs_rqs. Sometimes all of
cfs_b->runtime is incorrectly allocated to one cfs_rq, so the other
cfs_rqs attached to this cfs_b can't get runtime and stay throttled.

We find that a throttled cfs_rq can have a non-negative
cfs_rq->runtime_remaining, which causes an unexpected cast from s64 to
u64 in this snippet:

  distribute_cfs_runtime() {
          runtime = -cfs_rq->runtime_remaining + 1;
  }

The runtime here will change to a very large number and consume all of
cfs_b->runtime in this cfs_b period.

According to Ben Segall, a throttled cfs_rq can still have
account_cfs_rq_runtime() called on it because it is throttled before
idle_balance(), and idle_balance() calls update_rq_clock() to add time
that is then accounted to the task.

This commit prevents a cfs_rq from being assigned new runtime while it
is throttled, until distribute_cfs_runtime() is called for it.

Signed-off-by: Liangyan <liangyan.peng@xxxxxxxxxxxxxxxxx>
Signed-off-by: Peter Zijlstra (Intel) <peterz@xxxxxxxxxxxxx>
Reviewed-by: Valentin Schneider <valentin.schneider@xxxxxxx>
Reviewed-by: Ben Segall <bsegall@xxxxxxxxxx>
Cc: Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx>
Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
Cc: shanpeic@xxxxxxxxxxxxxxxxx
Cc: xlpang@xxxxxxxxxxxxxxxxx
Fixes: d3d9dc330236 ("sched: Throttle entities exceeding their allowed bandwidth")
Link: https://lkml.kernel.org/r/20190826121633.6538-1-liangyan.peng@xxxxxxxxxxxxxxxxx
Signed-off-by: Ingo Molnar <mingo@xxxxxxxxxx>
[bwh: Backported to 3.16: Open-code SCHED_WARN_ON().]
Signed-off-by: Ben Hutchings <ben@xxxxxxxxxxxxxxx>
---
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3271,6 +3271,8 @@ static void __account_cfs_rq_runtime(str
 	if (likely(cfs_rq->runtime_remaining > 0))
 		return;
 
+	if (cfs_rq->throttled)
+		return;
 	/*
 	 * if we're unable to extend our runtime we resched so that the active
 	 * hierarchy can be throttled
@@ -3450,6 +3452,11 @@ static u64 distribute_cfs_runtime(struct
 		if (!cfs_rq_throttled(cfs_rq))
 			goto next;
 
+		/* By the above check, this should never be true */
+#ifdef CONFIG_SCHED_DEBUG
+		WARN_ON_ONCE(cfs_rq->runtime_remaining > 0);
+#endif
+
 		runtime = -cfs_rq->runtime_remaining + 1;
 		if (runtime > remaining)
 			runtime = remaining;
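
For reference, here is a minimal standalone sketch (not kernel code and
not part of the patch; the variables merely mirror the s64
runtime_remaining and u64 runtime types used in distribute_cfs_runtime())
showing how a non-negative remaining value wraps into a huge unsigned
runtime, which is the cast described in the commit message:

  #include <stdio.h>
  #include <stdint.h>

  int main(void)
  {
  	/* A throttled cfs_rq left with non-negative remaining runtime */
  	int64_t runtime_remaining = 5;	/* stands in for the kernel's s64 */
  	uint64_t runtime;		/* stands in for the kernel's u64 */

  	/* Same expression as in distribute_cfs_runtime() */
  	runtime = -runtime_remaining + 1;

  	/* The s64 result -4 wraps to 18446744073709551612 when stored in u64 */
  	printf("%llu\n", (unsigned long long)runtime);
  	return 0;
  }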