On Mon, Aug 06, 2018 at 05:25:28PM +0200, Peter Zijlstra wrote:
> On Mon, Aug 06, 2018 at 11:05:50AM -0400, Johannes Weiner wrote:
> > Argh, that's right. This needs an explicit count if we want to
> > access it locklessly. And you already said you didn't like that
> > this is the only state not derived purely from the task counters,
> > so maybe this is the way to go after all.
> >
> > How about something like this (untested)?
> >
> > +static inline void psi_switch(struct rq *rq, struct task_struct *prev,
> > +			      struct task_struct *next)
> > +{
> > +	if (psi_disabled)
> > +		return;
> > +
> > +	if (unlikely(prev->flags & PF_MEMSTALL))
> > +		psi_task_change(prev, rq_clock(rq), TSK_RECLAIMING, 0);
> > +	if (unlikely(next->flags & PF_MEMSTALL))
> > +		psi_task_change(next, rq_clock(rq), 0, TSK_RECLAIMING);
> > +}
>
> Urgh... can't say I really like that.
>
> I would really rather do that scheduler_tick() thing to avoid the
> remote update. The tick is a lot less hot than the switch path and
> esp. next->flags might be a cold line (prev->flags is typically the
> same line as prev->state, so we already have that, but I don't think
> anybody currently looks at next->flags or its line, so that'd be a
> cold load).

Okay, the tick updater sounds like a much better option then. HZ
frequency should produce more than recent enough data.

That means we retain the not-so-nice PF_MEMSTALL flag test under the
rq lock, but it'll eliminate most of that memory ordering headache.
I'll do that.

Thanks!
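
For reference, a minimal sketch of what such a tick-side hook could
look like, assuming the psi_disabled and PF_MEMSTALL checks from the
quoted hunk; the names psi_task_tick() and psi_memstall_tick() and the
per-cpu accounting helper are illustrative assumptions here, not code
taken from this thread:

	/*
	 * Sketch only: called from scheduler_tick() with the rq lock
	 * already held, so only the local CPU's state is touched and
	 * no remote update is needed.
	 */
	static inline void psi_task_tick(struct rq *rq)
	{
		if (psi_disabled)
			return;

		/*
		 * rq->curr->flags is hot at tick time; contrast the
		 * switch-time version above, where next->flags may sit
		 * on a cold cache line.
		 */
		if (unlikely(rq->curr->flags & PF_MEMSTALL))
			/* hypothetical per-cpu accounting helper */
			psi_memstall_tick(rq->curr, cpu_of(rq));
	}

The trade-off noted above still applies: the PF_MEMSTALL test stays
under the rq lock, but sampling at HZ granularity avoids touching a
remote runqueue and most of the associated memory ordering concerns.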