On Fri, Jul 23, 2021 at 09:38:44AM -0400 Phil Auld wrote:
> On Fri, Jul 09, 2021 at 01:38:20PM +0200 Peter Zijlstra wrote:
> > On Thu, Jul 08, 2021 at 09:25:45AM -0400, Phil Auld wrote:
> > > Hi Peter,
> > >
> > > On Thu, Jul 08, 2021 at 09:26:26AM +0200 Peter Zijlstra wrote:
> > > > On Wed, Jul 07, 2021 at 03:04:57PM -0400, Phil Auld wrote:
> > > > > On systems with weaker memory ordering (e.g. power) commit dbfb089d360b
> > > > > ("sched: Fix loadavg accounting race") causes increasing values of load
> > > > > average (via rq->calc_load_active and calc_load_tasks) due to the wakeup
> > > > > CPU not always seeing the write to task->sched_contributes_to_load in
> > > > > __schedule(). Missing that, we fail to decrement nr_uninterruptible when
> > > > > waking up a task which incremented nr_uninterruptible when it slept.
> > > > >
> > > > > The rq->lock serialization is insufficient across different rq->locks.
> > > > >
> > > > > Add smp_wmb() to schedule and smp_rmb() before the read in
> > > > > ttwu_do_activate().
> > > > >
> > > > > diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> > > > > index 4ca80df205ce..ced7074716eb 100644
> > > > > --- a/kernel/sched/core.c
> > > > > +++ b/kernel/sched/core.c
> > > > > @@ -2992,6 +2992,8 @@ ttwu_do_activate(struct rq *rq, struct task_struct *p, int wake_flags,
> > > > >
> > > > >  	lockdep_assert_held(&rq->lock);
> > > > >
> > > > > +	/* Pairs with smp_wmb in __schedule() */
> > > > > +	smp_rmb();
> > > > >  	if (p->sched_contributes_to_load)
> > > > >  		rq->nr_uninterruptible--;
> > > > >
> > > >
> > > > Is this really needed ?! (this question is a big fat clue the comment is
> > > > insufficient). AFAICT try_to_wake_up() has a LOAD-ACQUIRE on p->on_rq
> > > > and hence the p->sched_contributes_to_load must already happen after.
> > > >
> > >
> > > Yes, it is needed. We've got idle power systems with load average of 530.21.
> > > Calc_load_tasks is 530, and the sum of both nr_uninterruptible and
> > > calc_load_active across all the runqueues is 530. Basically monotonically
> > > non-decreasing load average. With the patch this no longer happens.
> >
> > Have you tried without the rmb here? Do you really need both barriers?
> >
>
> You're right here. (I see now that you were asking about the rmb specifically
> in the first question.) The rmb is not needed.
>
> I was unable to reproduce it with the upstream kernel. I still think it is
> a problem, though, since the code in question is all the same. The recent
> changes to unbound workqueues, which make them more likely to run on the
> submitting CPU, may be masking the problem, since it obviously requires
> multiple CPUs to hit.
>
> If I can isolate those changes I can try to revert them in upstream and
> see if I can get it there.
>
> I suppose pulling those changes back could get us past this, but
> I'm not a fan of just hiding it by making it harder to hit.
>
> I've not gotten to the disable TTWU_QUEUE test, that's next...

ETOOMANYTREES

Sorry for the noise. I was using the wrong tree to compare with upstream.
The offending tree is missing f97bb5272d9e ("sched: Fix data-race in wakeup").
I thought the loadavg increase sounded familiar...

Cheers,
Phil

>
> Cheers,
> Phil
>
> --
>
--
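
For readers following the ordering argument above: below is a minimal
userspace sketch (illustrative only, not the actual kernel code, using
C11 atomics in place of the kernel's smp_store_release()/smp_load_acquire()
primitives) of the acquire/release pairing Peter points at. The struct
and function names are made up for the example; only the field names
echo the real ones. The point it demonstrates is that an acquire load
of on_rq on the waking side is enough to make the earlier plain write
to sched_contributes_to_load visible, so no extra smp_rmb() is needed.

/* barrier-pairing sketch, compile with: cc -std=c11 -c sketch.c */
#include <stdatomic.h>
#include <stdbool.h>

struct task {
	bool sched_contributes_to_load;	/* plain (non-atomic) field */
	atomic_int on_rq;		/* published with release/acquire */
};

/*
 * Sleeping side, standing in for __schedule(): record the load
 * contribution with a plain store, then publish the dequeue with
 * release semantics. The release orders the plain store before it.
 */
static void sleep_side(struct task *p)
{
	p->sched_contributes_to_load = true;
	atomic_store_explicit(&p->on_rq, 0, memory_order_release);
}

/*
 * Waking side, standing in for try_to_wake_up()/ttwu_do_activate():
 * the acquire load of on_rq synchronizes with the release store above,
 * so once we observe on_rq == 0 the plain store to
 * sched_contributes_to_load is guaranteed visible on this CPU.
 */
static bool wake_side(struct task *p)
{
	if (atomic_load_explicit(&p->on_rq, memory_order_acquire))
		return false;	/* still queued; nothing to account */
	return p->sched_contributes_to_load;	/* safe to read here */
}

What the sketch cannot show is the bug the thread actually landed on:
f97bb5272d9e addresses a data race on the bitfield write itself, not a
missing barrier, which is why the pairing above was sound all along.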