On Feb 22, 2018 1:40 AM, "Peter Zijlstra" <peterz@xxxxxxxxxxxxx> wrote:
On Thu, Feb 22, 2018 at 11:06:33AM +0900, Minchan Kim wrote:
> On Wed, Feb 21, 2018 at 04:23:43PM -0800, Daniel Colascione wrote:
> > kernel/sched/core.c | 3 +++
> > 1 file changed, 3 insertions(+)
> >
> > diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> > index a7bf32aabfda..7f197a7698ee 100644
> > --- a/kernel/sched/core.c
> > +++ b/kernel/sched/core.c
> > @@ -3429,6 +3429,9 @@ asmlinkage __visible void __sched schedule(void)
> >  	struct task_struct *tsk = current;
> >
> >  	sched_submit_work(tsk);
> > +	if (tsk->mm)
> > +		sync_mm_rss(tsk->mm);
> > +
> >  	do {
> >  		preempt_disable();
> >  		__schedule(false);
> >

Obviously I completely hate that; and you really _should_ have Cc'ed me
earlier ;-)

I thought I might get a reaction like that. :-)

That is still well over 100 cycles in the case when all counters did
change. Far _far_ more if the mm counters are contended (up to 150 times
more is quite possible).
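
For reference, sync_mm_rss() is roughly the loop below (paraphrased from
mm/memory.c of that era; exact details vary by kernel version). Each
dirty per-task counter turns into an atomic add on the shared mm-wide
counter, which is where the contended-cacheline cost above comes from:

	void sync_mm_rss(struct mm_struct *mm)
	{
		int i;

		for (i = 0; i < NR_MM_COUNTERS; i++) {
			if (current->rss_stat.count[i]) {
				/* atomic RMW on a cacheline shared by
				 * every thread using this mm */
				add_mm_counter(mm, i, current->rss_stat.count[i]);
				current->rss_stat.count[i] = 0;
			}
		}
		current->rss_stat.events = 0;
	}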

Would it help to sync the counters only when they're dirty, detecting
that with a task status flag or something?
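
A minimal sketch of that idea, leaning on the existing
SPLIT_RSS_COUNTING bookkeeping: tsk->rss_stat.events is bumped on every
per-task counter update and cleared by sync_mm_rss(), so a nonzero
value already acts as a per-task dirty flag (untested, and only valid
when SPLIT_RSS_COUNTING is configured):

	sched_submit_work(tsk);
	/* only pay for the flush when something actually changed */
	if (tsk->mm && tsk->rss_stat.events)
		sync_mm_rss(tsk->mm);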

> > > > Ping? Is this approach just a bad idea? We could instead just manually sync
> > > > all mm-attached tasks at counter-retrieval time.
> > >
> > > IMHO, yes, it should be done when the user wants to see it, which
> > > would be a really cold path, while this schedule function is hot.
> > >
> >
> > The problem with doing it that way is that we need to look at each task
> > attached to a particular mm. AFAIK (and please tell me if I'm wrong), the
> > only way to do that is to iterate over all processes, and for each process
> > attached to the mm we want, iterate over all its tasks (since each one has
> > to have the same mm, I think). Does that sound right?
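
(For concreteness, the "walk everything" approach described above would
look something like the sketch below. sync_one_task() is a made-up
stand-in for whatever per-task flush we'd use, and touching another
task's unsynchronized counters is racy, so treat this purely as a
sketch.)

	/* flush per-task rss deltas for every task sharing @mm by
	 * walking all processes and each of their threads */
	static void sync_all_rss(struct mm_struct *mm)
	{
		struct task_struct *g, *t;

		rcu_read_lock();
		for_each_process_thread(g, t) {
			if (t->mm == mm)
				sync_one_task(t);	/* hypothetical helper */
		}
		rcu_read_unlock();
	}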

You could just iterate the thread group and call it a day. Yes, strictly
speaking it's possible to have mm's shared outside the thread group, but
practically that 'never' happens.

CLONE_VM without CLONE_THREAD just isn't a popular thing afaik.
So while it's not perfect, it might well be good enough.
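
(Peter's shortcut, roughly: walk only the thread group of a task known
to own the mm, accepting that the rare CLONE_VM-without-CLONE_THREAD
sharers are missed. Same hypothetical sync_one_task() helper as above;
just a sketch.)

	static void sync_thread_group_rss(struct task_struct *p)
	{
		struct task_struct *t;

		rcu_read_lock();
		for_each_thread(p, t)
			sync_one_task(t);	/* hypothetical helper */
		rcu_read_unlock();
	}
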
Take a look at the other patch I posted. Seems to work.