On Tue, Aug 30, 2022 at 12:24 Sebastian Andrzej Siewior <bigeasy@xxxxxxxxxxxxx> wrote:
>
> On 2022-08-29 17:48:05 [+0200], Maurizio Lombardi wrote:
> > diff --git a/mm/slub.c b/mm/slub.c
> > index 862dbd9af4f5..d46ee90651d2 100644
> > --- a/mm/slub.c
> > +++ b/mm/slub.c
> > @@ -2681,30 +2681,34 @@ struct slub_flush_work {
> >  	bool skip;
> >  };
> >  
> > +static void flush_cpu_slab(void *d)
> > +{
> > +	struct kmem_cache *s = d;
> > +	struct kmem_cache_cpu *c = this_cpu_ptr(s->cpu_slab);
> > +
> > +	if (c->slab)
> > +		flush_slab(s, c);
> > +
> > +	unfreeze_partials(s);
> > +}
> …
> > @@ -2721,13 +2725,18 @@ static void flush_all_cpus_locked(struct kmem_cache *s)
> >  	lockdep_assert_cpus_held();
> >  	mutex_lock(&flush_lock);
> >  
> > +	if (in_task()) {
> > +		on_each_cpu_cond(has_cpu_slab, flush_cpu_slab, s, 1);
>
> This blocks with preemption disabled until flush_cpu_slab() has
> completed on all CPUs. That function acquires a local_lock_t, which
> cannot be taken from in-IRQ context, and that is exactly where this
> function will be invoked due to on_each_cpu_cond().

Hmm, this is indeed not good. I guess I should have used
for_each_online_cpu() instead of on_each_cpu_cond().

> Couldn't we instead use a workqueue with that WQ_MEM_RECLAIM bit? It may
> reclaim memory after all ;)

That should also fix it. Do you think it would be ok to allocate such a
workqueue in kmem_cache_init()? A rough sketch of what I have in mind is
below.

Thanks,
Maurizio
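P.S. Here is a minimal, untested sketch of the workqueue idea, reusing
the slub_flush_work/slub_flush/flush_lock machinery that mm/slub.c
already has. The "flushwq" variable and the "slub_flushwq" queue name
are placeholders I made up, and I am allocating the queue from
kmem_cache_init_late() rather than kmem_cache_init() on the assumption
that the latter runs before the workqueue code is usable; that still
needs to be verified.

/* Already in mm/slub.c, shown here for context: */
struct slub_flush_work {
	struct work_struct work;
	struct kmem_cache *s;
	bool skip;
};
static DEFINE_PER_CPU(struct slub_flush_work, slub_flush);
static DEFINE_MUTEX(flush_lock);

/* Placeholder name; allocated once at init time. */
static struct workqueue_struct *flushwq;

void __init kmem_cache_init_late(void)
{
	/*
	 * WQ_MEM_RECLAIM gives the queue a rescuer thread, so the
	 * flush can make forward progress even under memory pressure.
	 */
	flushwq = alloc_workqueue("slub_flushwq", WQ_MEM_RECLAIM, 0);
	WARN_ON(!flushwq);
}

/*
 * work_struct-based flush_cpu_slab(): each work item is queued on a
 * specific CPU, so it always runs in task context on that CPU and
 * taking a local_lock_t from flush_slab()/unfreeze_partials() is
 * fine on PREEMPT_RT.
 */
static void flush_cpu_slab(struct work_struct *w)
{
	struct slub_flush_work *sfw;
	struct kmem_cache *s;
	struct kmem_cache_cpu *c;

	sfw = container_of(w, struct slub_flush_work, work);
	s = sfw->s;
	c = this_cpu_ptr(s->cpu_slab);

	if (c->slab)
		flush_slab(s, c);

	unfreeze_partials(s);
}

static void flush_all_cpus_locked(struct kmem_cache *s)
{
	struct slub_flush_work *sfw;
	unsigned int cpu;

	lockdep_assert_cpus_held();
	mutex_lock(&flush_lock);

	/* Queue one work item per CPU that has something to flush. */
	for_each_online_cpu(cpu) {
		sfw = &per_cpu(slub_flush, cpu);
		if (!has_cpu_slab(cpu, s)) {
			sfw->skip = true;
			continue;
		}
		INIT_WORK(&sfw->work, flush_cpu_slab);
		sfw->skip = false;
		sfw->s = s;
		queue_work_on(cpu, flushwq, &sfw->work);
	}

	/* Wait for every queued flush to finish. */
	for_each_online_cpu(cpu) {
		sfw = &per_cpu(slub_flush, cpu);
		if (sfw->skip)
			continue;
		flush_work(&sfw->work);
	}

	mutex_unlock(&flush_lock);
}

queue_work_on() pins each flush to its own CPU, and the flush_work()
loop at the end gives the same "wait until all CPUs are flushed"
semantics as the on_each_cpu_cond() call, but entirely in task context.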