On Mon, Nov 30, 2020 at 10:45:14AM -0800, Yang Shi wrote:
> When investigating a slab cache bloat problem, a significant amount of
> negative dentry cache was seen, but confusingly it neither got shrunk
> by the reclaimer (the host is very tight on memory) nor by dropping
> caches. The vmcore shows there are over 14M negative dentry objects on
> the lru, but tracing shows they were not scanned at all. Further
> investigation shows that the memcg's vfs shrinker_map bit is not set,
> so the reclaimer and cache dropping just skip calling the vfs shrinker.
> We have to reboot the hosts to get the memory back.
>
> I didn't manage to come up with a reproducer in a test environment, and
> the problem can't be reproduced after rebooting. But by code inspection
> it seems there is a race between shrinker map bit clearing and
> reparenting. The hypothesis is elaborated below.
>
> The memcg hierarchy on our production environment looks like:
>
>         root
>        /    \
>   system    user
>
> The main workloads run under the user slice's children, and they create
> and remove memcgs frequently. So reparenting happens very often under
> the user slice, but no task runs under the user slice directly.
>
> So with the frequent reparenting and tight memory pressure, the
> hypothetical race condition below may happen:
>
> CPU A                            CPU B                  CPU C
> reparent
>     dst->nr_items == 0
>                                  shrinker:
>                                    total_objects == 0
>     add src->nr_items to dst
>     set_bit
>                                    return SHRINK_EMPTY
>                                    clear_bit
>                                                         list_lru_del()
> reparent again
>     dst->nr_items may go negative
>     due to the concurrent
>     list_lru_del() on CPU C
>                                  second shrinker run:
>                                    read nr_items without any
>                                    synchronization, so it may see an
>                                    intermediate negative nr_items,
>                                    and then total_objects may
>                                    coincidentally be 0
>
>                                    keep the bit cleared
>     dst->nr_items != 0
>     skip set_bit
>     add src->nr_items to dst
>
> After this point dst->nr_items may never go back to zero, so
> reparenting will never set the shrinker_map bit again. And since no
> task runs under the user slice directly, no new object will be added to
> its lru to set the shrinker map bit either. The bit is kept cleared
> forever.
>
> How does list_lru_del() race with reparenting? Reparenting replaces the
> children's kmemcg_id with the parent's without the protection of
> nlru->lock, so list_lru_del() may see the parent's kmemcg_id while
> actually deleting items from the child's lru, decrementing the parent's
> nr_items. Hence the parent's nr_items may go negative, as commit
> 2788cf0c401c268b4819c5407493a8769b7007aa ("memcg: reparent list_lrus
> and free kmemcg_id on css offline") says.
>
> Can we move the kmemcg_id replacement after reparenting? No, because
> the race with list_lru_del() may then result in a negative
> src->nr_items that will never be fixed up. The shrinker would never
> return SHRINK_EMPTY, the shrinker map bit would stay set forever, and
> the shrinker would keep being called for nothing.
>
> Can we synchronize list_lru_del() and reparenting? Yes, it could be
> done, but it seems we would need to introduce a new lock or use
> nlru->lock. Moving the kmemcg_id replacement under nlru->lock sounds
> complicated, and list_lru_del() may be called quite often, which would
> exacerbate some hot paths, e.g. dentry kill.
>
> So it sounds acceptable to synchronize reading nr_items to avoid seeing
> an intermediate negative value, given the simplicity and that it is
> typically just called by shrinkers when counting the freeable objects.
>
> The patch was tested with some shrinker-intensive workloads; no
> noticeable regression was spotted.

Hi Yang!

It's really tricky, thank you for digging in! It's a perfect analysis!
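For readers following the diagram: CPU B's clear_bit()/set_bit() dance
is the SHRINK_EMPTY handling in shrink_slab_memcg(). A simplified
sketch of that loop body, assuming mm/vmscan.c of roughly this era
(some details, e.g. the SHRINKER_REGISTERING handling, are omitted):

--
        ret = do_shrink_slab(&sc, shrinker, priority);
        if (ret == SHRINK_EMPTY) {
                /* CPU B in the diagram clears the bit here */
                clear_bit(i, map->map);
                /*
                 * Pairs with the barrier in memcg_set_shrinker_bit();
                 * rerun the shrinker to catch objects added in the
                 * meantime.
                 */
                smp_mb__after_atomic();
                ret = do_shrink_slab(&sc, shrinker, priority);
                if (ret == SHRINK_EMPTY)
                        ret = 0;        /* the bit stays cleared */
                else
                        memcg_set_shrinker_bit(memcg, nid, i);
        }
        freed += ret;
--

If the second do_shrink_slab() call lands in the window where the
transient negative nr_items makes super_cache_count() compute
total_objects == 0 (and hence return SHRINK_EMPTY), the bit stays
cleared even though the lru is not empty, which is exactly the
scenario in the diagram.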
I wonder, though, if it's better to just always set the shrinker bit on
reparenting if we do reparent some items? Then we'll avoid adding new
synchronization to the hot path. What do you think?

--
@@ -534,7 +534,6 @@ static void memcg_drain_list_lru_node(struct list_lru *lru, int nid,
 	struct list_lru_node *nlru = &lru->node[nid];
 	int dst_idx = dst_memcg->kmemcg_id;
 	struct list_lru_one *src, *dst;
-	bool set;
 
 	/*
 	 * Since list_lru_{add,del} may be called under an IRQ-safe lock,
@@ -546,9 +545,8 @@ static void memcg_drain_list_lru_node(struct list_lru *lru, int nid,
 	dst = list_lru_from_memcg_idx(nlru, dst_idx);
 
 	list_splice_init(&src->list, &dst->list);
-	set = (!dst->nr_items && src->nr_items);
 	dst->nr_items += src->nr_items;
-	if (set)
+	if (src->nr_items)
 		memcg_set_shrinker_bit(dst_memcg, nid, lru_shrinker_id(lru));
 	src->nr_items = 0;
 
--

Btw, it seems that the bug is quite old. I wonder why we haven't seen
it before? Any ideas?

Thanks!

>
> Cc: Vladimir Davydov <vdavydov.dev@xxxxxxxxx>
> Cc: Kirill Tkhai <ktkhai@xxxxxxxxxxxxx>
> Cc: Roman Gushchin <guro@xxxxxx>
> Cc: Shakeel Butt <shakeelb@xxxxxxxxxx>
> Signed-off-by: Yang Shi <shy828301@xxxxxxxxx>
> ---
>  mm/list_lru.c | 11 +++++++++--
>  1 file changed, 9 insertions(+), 2 deletions(-)
>
> diff --git a/mm/list_lru.c b/mm/list_lru.c
> index 5aa6e44bc2ae..5c128a7710ff 100644
> --- a/mm/list_lru.c
> +++ b/mm/list_lru.c
> @@ -178,10 +178,17 @@ unsigned long list_lru_count_one(struct list_lru *lru,
>  	struct list_lru_one *l;
>  	unsigned long count;
>  
> -	rcu_read_lock();
> +	/*
> +	 * Since list_lru_{add,del} may be called under an IRQ-safe lock,
> +	 * we have to use IRQ-safe primitives here to avoid deadlock.
> +	 *
> +	 * Hold the lock to prevent from seeing transient negative
> +	 * nr_items value.
> +	 */
> +	spin_lock_irq(&nlru->lock);
>  	l = list_lru_from_memcg_idx(nlru, memcg_cache_id(memcg));
>  	count = READ_ONCE(l->nr_items);
> -	rcu_read_unlock();
> +	spin_unlock_irq(&nlru->lock);
>  
>  	return count;
>  }
> -- 
> 2.26.2
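For completeness, this is how list_lru_count_one() would read with the
hunk above applied. It is a reconstruction from the diff, with the
unchanged lines assumed from mainline mm/list_lru.c of that era:

--
unsigned long list_lru_count_one(struct list_lru *lru,
                                 int nid, struct mem_cgroup *memcg)
{
        struct list_lru_node *nlru = &lru->node[nid];
        struct list_lru_one *l;
        unsigned long count;

        /*
         * Since list_lru_{add,del} may be called under an IRQ-safe lock,
         * we have to use IRQ-safe primitives here to avoid deadlock.
         *
         * Hold the lock to prevent from seeing transient negative
         * nr_items value.
         */
        spin_lock_irq(&nlru->lock);
        l = list_lru_from_memcg_idx(nlru, memcg_cache_id(memcg));
        count = READ_ONCE(l->nr_items);
        spin_unlock_irq(&nlru->lock);

        return count;
}
--

Note that nlru->lock is the same lock taken by list_lru_{add,del} and
by the reparenting splice in memcg_drain_list_lru_node(), so the count
can no longer observe a mid-update negative nr_items.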