On Tue 04-01-22 13:22:25, Yu Zhao wrote:
[...]
> +static void lru_gen_age_node(struct pglist_data *pgdat, struct scan_control *sc)
> +{
> +	struct mem_cgroup *memcg;
> +	bool success = false;
> +	unsigned long min_ttl = READ_ONCE(lru_gen_min_ttl);
> +
> +	VM_BUG_ON(!current_is_kswapd());
> +
> +	current->reclaim_state->mm_walk = &pgdat->mm_walk;
> +
> +	memcg = mem_cgroup_iter(NULL, NULL, NULL);
> +	do {
> +		struct lruvec *lruvec = mem_cgroup_lruvec(memcg, pgdat);
> +
> +		if (age_lruvec(lruvec, sc, min_ttl))
> +			success = true;
> +
> +		cond_resched();
> +	} while ((memcg = mem_cgroup_iter(NULL, memcg, NULL)));
> +
> +	if (!success && mutex_trylock(&oom_lock)) {
> +		struct oom_control oc = {
> +			.gfp_mask = sc->gfp_mask,
> +			.order = sc->order,
> +		};
> +
> +		if (!oom_reaping_in_progress())
> +			out_of_memory(&oc);
> +
> +		mutex_unlock(&oom_lock);
> +	}

Why do you need to trigger the OOM killer from this path? Why can't you
rely on the page allocator to do that like we do now?

-- 
Michal Hocko
SUSE Labs
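
For context, the conventional path being referred to ("like we do now") looks
roughly like the condensed sketch below: the allocation slow path, not kswapd,
invokes the OOM killer after direct reclaim has repeatedly failed. This is a
simplification of the logic in mm/page_alloc.c; the function name carries a
_sketch suffix and the signature and body are approximate, not verbatim kernel
code.

	/*
	 * Condensed, approximate sketch of the allocator-driven OOM path
	 * (simplified from mm/page_alloc.c; not verbatim kernel code).
	 */
	static struct page *alloc_pages_may_oom_sketch(gfp_t gfp_mask, unsigned int order,
							const struct alloc_context *ac,
							unsigned long *did_some_progress)
	{
		struct oom_control oc = {
			.zonelist = ac->zonelist,
			.nodemask = ac->nodemask,
			.gfp_mask = gfp_mask,
			.order = order,
		};
		struct page *page = NULL;

		/* Serialize OOM killing; back off if someone else holds the lock. */
		if (!mutex_trylock(&oom_lock))
			return NULL;

		/* One last attempt at the high watermark before killing anything. */
		page = get_page_from_freelist(gfp_mask, order, ALLOC_WMARK_HIGH, ac);

		/* Only if that also fails does the allocator invoke the OOM killer. */
		if (!page && out_of_memory(&oc))
			*did_some_progress = 1;

		mutex_unlock(&oom_lock);
		return page;
	}

The slow path only falls back to this after reclaim and compaction retries have
been exhausted, which is the behaviour the question contrasts with kswapd's
aging path calling out_of_memory() directly.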