Re: [PATCH v3 5/5] mm/memcg: Protect memcg_stock with a local_lock_t

On Mon 21-02-22 17:44:13, Sebastian Andrzej Siewior wrote:
> On 2022-02-21 17:24:41 [+0100], Michal Hocko wrote:
> > > > > @@ -2282,14 +2288,9 @@ static void drain_all_stock(struct mem_cgroup *root_memcg)
> > > > >  		rcu_read_unlock();
> > > > >  
> > > > >  		if (flush &&
> > > > > -		    !test_and_set_bit(FLUSHING_CACHED_CHARGE, &stock->flags)) {
> > > > > -			if (cpu == curcpu)
> > > > > -				drain_local_stock(&stock->work);
> > > > > -			else
> > > > > -				schedule_work_on(cpu, &stock->work);
> > > > > -		}
> > > > > +		    !test_and_set_bit(FLUSHING_CACHED_CHARGE, &stock->flags))
> > > > > +			schedule_work_on(cpu, &stock->work);
> > > > 
> > > > Maybe I am missing something, but on !PREEMPT kernels there is
> > > > nothing really guaranteeing that the worker runs, so there should
> > > > be a cond_resched after the mutex is unlocked. I do not think we
> > > > want to rely on callers to be aware of this subtlety.
> > > 
> > > There is no guarantee on PREEMPT kernels either. The worker will be
> > > made runnable and will be put on the CPU when the scheduler sees fit,
> > > and there could be another worker which takes precedence (queued
> > > earlier). But I was not aware that the worker _needs_ to run before
> > > we return.
> > 
> > A lack of draining will not be a correctness problem (sorry, I should
> > have made that clear). It is more about subtlety than anything. E.g.
> > the charging path could be forced into memory reclaim because of the
> > cached charges which are still waiting for their draining. Not really
> > something to lose sleep over from the runtime perspective. I was just
> > wondering whether this makes things more complex than necessary.
> 
> So it is not strictly wrong, but it would be better if we could do
> drain_local_stock() on the local CPU.
> 
> > > We might get migrated after put_cpu(), so I wasn't aware that this
> > > is important. Should we make a best-effort attempt and wait for the
> > > worker on the current CPU?
> > 
> > > > An alternative would be to split out __drain_local_stock which doesn't
> > > > do local_lock.
> > > 
> > > but isn't the section in drain_local_stock() unprotected then?
> > 
> > local_lock instead of {get,put}_cpu would handle that, right?
> 
> It took a while, but it clicked :)
> If we acquire the local_lock_t, which we would otherwise acquire in
> drain_local_stock(), before the for_each_cpu loop (instead of
> get_cpu()/put_cpu(), as you say), then we would indeed need
> __drain_local_stock() and things would work. But it looks like an abuse
> of the lock to avoid CPU migration, since there is no need to have it
> acquired at that point. Also, the whole section would then run with
> interrupts disabled, and there is no need for that.
> 
> What about replacing get_cpu() with migrate_disable()?

Yeah, that would be a better option. I am just not used to thinking in
RT terms, so migrate_disable() didn't really come to my mind.
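
For the archives, a rough and untested sketch of what drain_all_stock()
could look like with migrate_disable(). This assumes drain_local_stock()
itself takes the memcg_stock local_lock (as introduced earlier in this
series), so calling it directly for the local CPU stays protected:

static void drain_all_stock(struct mem_cgroup *root_memcg)
{
	int cpu, curcpu;

	/* If someone's already draining, avoid adding more workers. */
	if (!mutex_trylock(&percpu_charge_mutex))
		return;
	/*
	 * Pin the task to this CPU without disabling interrupts and
	 * without taking the local_lock around the whole loop.
	 */
	migrate_disable();
	curcpu = smp_processor_id();
	for_each_online_cpu(cpu) {
		struct memcg_stock_pcp *stock = &per_cpu(memcg_stock, cpu);
		struct mem_cgroup *memcg;
		bool flush = false;

		rcu_read_lock();
		memcg = stock->cached;
		if (memcg && stock->nr_pages &&
		    mem_cgroup_is_descendant(memcg, root_memcg))
			flush = true;
		rcu_read_unlock();

		if (flush &&
		    !test_and_set_bit(FLUSHING_CACHED_CHARGE, &stock->flags)) {
			if (cpu == curcpu)
				/* drain_local_stock() takes the local_lock */
				drain_local_stock(&stock->work);
			else
				schedule_work_on(cpu, &stock->work);
		}
	}
	migrate_enable();
	mutex_unlock(&percpu_charge_mutex);
}

Compared to holding the local_lock across the loop, this avoids both the
__drain_local_stock() split and running the whole section with interrupts
disabled: migrate_disable() only prevents migration, which is all that is
needed to keep the cpu == curcpu case draining on the local CPU.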

-- 
Michal Hocko
SUSE Labs



