On Wed, Dec 2, 2020 at 7:01 AM Shakeel Butt <shakeelb@xxxxxxxxxx> wrote:
>
> On Tue, Dec 1, 2020 at 1:25 PM Yang Shi <shy828301@xxxxxxxxx> wrote:
> >
> > When investigating a slab cache bloat problem, significant amount of
> > negative dentry cache was seen, but confusingly they neither got shrunk
> > by reclaimer (the host has very tight memory) nor be shrunk by dropping
> > cache. The vmcore shows there are over 14M negative dentry objects on lru,
> > but tracing result shows they were even not scanned at all. The further
> > investigation shows the memcg's vfs shrinker_map bit is not set. So the
> > reclaimer or dropping cache just skip calling vfs shrinker. So we have
> > to reboot the hosts to get the memory back.
> >
> > I didn't manage to come up with a reproducer in test environment, and the
> > problem can't be reproduced after rebooting. But it seems there is race
> > between shrinker map bit clear and reparenting by code inspection. The
> > hypothesis is elaborated as below.
> >
> > The memcg hierarchy on our production environment looks like:
> >                 root
> >                /    \
> >          system      user
> >
> > The main workloads are running under user slice's children, and it creates
> > and removes memcg frequently. So reparenting happens very often under user
> > slice, but no task is under user slice directly.
> >
> > So with the frequent reparenting and tight memory pressure, the below
> > hypothetical race condition may happen:
> >
> >        CPU A                               CPU B
> > reparent
> >     dst->nr_items == 0
> >                                        shrinker:
> >                                            total_objects == 0
> >     add src->nr_items to dst
> >     set_bit
> >     retrun SHRINK_EMPTY
>
> return
>
> >                                            clear_bit
> > child memcg offline
> >     replace child's kmemcg_id to
>
> with
>
> >     parent's (in memcg_offline_kmem())
> >                                        list_lru_del() between shrinker runs
> >                                            see parent's kmemcg_id
> >                                            dec dst->nr_items
> > reparent again
> >     dst->nr_items may go negative
> >     due to concurrent list_lru_del()
> >
> >                                        The second run of shrinker:
> >                                            read nr_items without any
> >                                            synchronization, so it may
> >                                            see intermediate negative
> >                                            nr_items then total_objects
> >                                            may return 0 conincidently
>
> coincidently
> >
> >                                            keep the bit cleared
> >     dst->nr_items != 0
> >     skip set_bit
> >     add scr->nr_item to dst
> >
> > After this point dst->nr_item may never go zero, so reparenting will not
> > set shrinker_map bit anymore. And since there is no task under user
> > slice directly, so no new object will be added to its lru to set the
> > shrinker map bit either. That bit is kept cleared forever.
> >
> > How does list_lru_del() race with reparenting? It is because
> > reparenting replaces childen's kmemcg_id to parent's without protecting
>
> children's
>
> > from nlru->lock, so list_lru_del() may see parent's kmemcg_id but
> > actually deleting items from child's lru, but dec'ing parent's nr_items,
> > so the parent's nr_items may go negative as commit
> > 2788cf0c401c268b4819c5407493a8769b7007aa ("memcg: reparent list_lrus and
> > free kmemcg_id on css offline") says.
> >
> > Since it is impossible that dst->nr_items goes negative and
> > src->nr_items goes zero at the same time, so it seems we could set the
> > shrinker map bit iff src->nr_items != 0. We could synchronize
> > list_lru_count_one() and reparenting with nlru->lock, but it seems
> > checking src->nr_items in reparenting is the simplest and avoids lock
> > contention.
> >
> > Fixes: fae91d6d8be5 ("mm/list_lru.c: set bit in memcg shrinker bitmap on first list_lru item appearance")
> > Suggested-by: Roman Gushchin <guro@xxxxxx>
> > Reviewed-by: Roman Gushchin <guro@xxxxxx>
> > Cc: Vladimir Davydov <vdavydov.dev@xxxxxxxxx>
> > Cc: Kirill Tkhai <ktkhai@xxxxxxxxxxxxx>
> > Cc: Shakeel Butt <shakeelb@xxxxxxxxxx>
> > Cc: <stable@xxxxxxxxxxxxxxx> v4.19+
> > Signed-off-by: Yang Shi <shy828301@xxxxxxxxx>
>
> Reviewed-by: Shakeel Butt <shakeelb@xxxxxxxxxx>

Thanks for finding those spelling errors. Will fix in v4.
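
To make the idea concrete for anyone skimming the thread, below is a rough
sketch of where the check lands: memcg_drain_list_lru_node() in
mm/list_lru.c (as it looks around v5.9/v5.10). This is only an illustration
of the approach described in the commit message, not the literal v4 patch,
so context lines and exact naming may differ:

static void memcg_drain_list_lru_node(struct list_lru *lru, int nid,
                                      int src_idx, struct mem_cgroup *dst_memcg)
{
        struct list_lru_node *nlru = &lru->node[nid];
        int dst_idx = memcg_cache_id(dst_memcg);
        struct list_lru_one *src, *dst;

        /*
         * Since list_lru_{add,del} may be called under an IRQ-safe lock,
         * we have to use IRQ-safe primitives here to avoid deadlock.
         */
        spin_lock_irq(&nlru->lock);

        src = list_lru_from_memcg_idx(nlru, src_idx);
        dst = list_lru_from_memcg_idx(nlru, dst_idx);

        list_splice_init(&src->list, &dst->list);

        /*
         * Only move the count and set the shrinker map bit when the child
         * actually had items.  Keying the set_bit off "dst->nr_items == 0"
         * is what goes wrong in the race above: dst->nr_items can be
         * transiently negative because of the unsynchronized
         * list_lru_del(), so the bit would stay cleared even though
         * objects were just moved onto dst.
         */
        if (src->nr_items) {
                dst->nr_items += src->nr_items;
                memcg_set_shrinker_bit(dst_memcg, nid, lru_shrinker_id(lru));
                src->nr_items = 0;
        }

        spin_unlock_irq(&nlru->lock);
}

Since the splice and both nr_items updates already run under nlru->lock,
checking src->nr_items here needs no extra synchronization with
list_lru_count_one(), which is why it looks simpler than taking nlru->lock
in the count path.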