On Tue 14-07-20 21:25:04, Yafang Shao wrote:
> On Tue, Jul 14, 2020 at 8:37 PM Michal Hocko <mhocko@xxxxxxxxxx> wrote:
[...]
> > > @@ -1560,16 +1560,31 @@ static bool mem_cgroup_out_of_memory(struct mem_cgroup *memcg, gfp_t gfp_mask,
> > >                 .gfp_mask = gfp_mask,
> > >                 .order = order,
> > >         };
> > > -       bool ret;
> > > +       bool ret = true;
> > >
> > >         if (mutex_lock_killable(&oom_lock))
> > >                 return true;
> > > +
> > >         /*
> > >          * A few threads which were not waiting at mutex_lock_killable() can
> > >          * fail to bail out. Therefore, check again after holding oom_lock.
> > >          */
> > > -       ret = should_force_charge() || out_of_memory(&oc);
> > > +       if (should_force_charge())
> > > +               goto out;
> > > +
> > > +       /*
> > > +        * Different tasks may be doing parallel oom, so after hold the
> > > +        * oom_lock the task should check the memcg margin again to check
> > > +        * whether other task has already made progress.
> > > +        */
> > > +       if (mem_cgroup_margin(memcg) >= (1 << order))
> > > +               goto out;
> >
> > Is there any reason why you simply haven't done this? (+ your comment
> > which is helpful).
> >
> No strong reason.
> I just think that threads of a multi-thread task are more likely to do
> parallel OOM, so I checked it first.
> I can change it as you suggested below, as it is more simple.

I would rather go with simplicity. This is a super slow path so ordering
of checks shouldn't matter much (if at all).

-- 
Michal Hocko
SUSE Labs
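
[Editorial note: Michal's exact suggested snippet is not quoted in this
excerpt. The following is only an illustrative sketch of the simpler
structure being argued for, folding the post-lock margin re-check into
the existing short-circuit chain; it assumes the kernel-internal helpers
visible in the quoted diff (should_force_charge(), mem_cgroup_margin(),
oom_lock, out_of_memory()) and is not a verbatim part of the thread.]

static bool mem_cgroup_out_of_memory(struct mem_cgroup *memcg, gfp_t gfp_mask,
				     int order)
{
	struct oom_control oc = {
		.zonelist = NULL,
		.nodemask = NULL,
		.memcg = memcg,
		.gfp_mask = gfp_mask,
		.order = order,
	};
	bool ret;

	if (mutex_lock_killable(&oom_lock))
		return true;
	/*
	 * Check again after holding oom_lock: a racing task may already be
	 * forcing the charge, or a parallel OOM may have widened the memcg
	 * margin enough that no further kill is needed. Returning true in
	 * either case tells the charge path to retry rather than kill.
	 * Ordering of these cheap checks is irrelevant on this slow path.
	 */
	ret = should_force_charge() ||
	      mem_cgroup_margin(memcg) >= (1 << order) ||
	      out_of_memory(&oc);
	mutex_unlock(&oom_lock);
	return ret;
}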