On Wed, 9 Mar 2011 13:27:50 -0800 (PST) David Rientjes <rientjes@xxxxxxxxxx> wrote:

> When a memcg is oom and current has already received a SIGKILL, then give
> it access to memory reserves with a higher scheduling priority so that it
> may quickly exit and free its memory.
>
> This is identical to the global oom killer and is done even before
> checking for panic_on_oom: a pending SIGKILL here while panic_on_oom is
> selected is guaranteed to have come from userspace; the thread only needs
> access to memory reserves to exit and thus we don't unnecessarily panic
> the machine until the kernel has no last resort to free memory.
>
> Signed-off-by: David Rientjes <rientjes@xxxxxxxxxx>
> ---
>  mm/oom_kill.c |   11 +++++++++++
>  1 files changed, 11 insertions(+), 0 deletions(-)
>
> diff --git a/mm/oom_kill.c b/mm/oom_kill.c
> --- a/mm/oom_kill.c
> +++ b/mm/oom_kill.c
> @@ -537,6 +537,17 @@ void mem_cgroup_out_of_memory(struct mem_cgroup *mem, gfp_t gfp_mask)
>  	unsigned int points = 0;
>  	struct task_struct *p;
>
> +	/*
> +	 * If current has a pending SIGKILL, then automatically select it. The
> +	 * goal is to allow it to allocate so that it may quickly exit and free
> +	 * its memory.
> +	 */
> +	if (fatal_signal_pending(current)) {
> +		set_thread_flag(TIF_MEMDIE);
> +		boost_dying_task_prio(current, NULL);
> +		return;
> +	}
> +
>  	check_panic_on_oom(CONSTRAINT_MEMCG, gfp_mask, 0, NULL);
>  	limit = mem_cgroup_get_limit(mem) >> PAGE_SHIFT;
>  	read_lock(&tasklist_lock);

The code duplication seems a bit gratuitous.

Was it deliberate that mem_cgroup_out_of_memory() ignores the oom
notifier callbacks?

(Why does that notifier list exist at all?  Wouldn't it be better to do
this via a vmscan shrinker?  Perhaps altered to be passed the scanning
priority?)
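For context, the two interfaces in question look roughly like this; a
minimal sketch assuming the 2.6.38-era APIs (the demo_* names are
placeholders, and the shrink callback signature here predates the later
shrink_control rework):

	#include <linux/module.h>
	#include <linux/notifier.h>
	#include <linux/oom.h>
	#include <linux/mm.h>

	/*
	 * oom notifier: invoked from out_of_memory() before a victim is
	 * selected; the handler adds any pages it freed to *parm so the
	 * OOM killer can bail out instead of killing a task.
	 */
	static int demo_oom_notify(struct notifier_block *self,
				   unsigned long dummy, void *parm)
	{
		unsigned long *freed = parm;

		*freed += 0;	/* pages this handler released would go here */
		return NOTIFY_OK;
	}

	static struct notifier_block demo_oom_nb = {
		.notifier_call = demo_oom_notify,
	};

	/*
	 * shrinker: called from shrink_slab() on every reclaim pass, not
	 * only at OOM time.  nr_to_scan == 0 is a query for the freeable
	 * object count; otherwise the callback should release up to
	 * nr_to_scan objects.  Note the callback receives no scanning
	 * priority -- that is the alteration being suggested above.
	 */
	static int demo_shrink(struct shrinker *s, int nr_to_scan,
			       gfp_t gfp_mask)
	{
		if (!nr_to_scan)
			return 0;	/* nothing cached in this demo */
		return 0;
	}

	static struct shrinker demo_shrinker = {
		.shrink = demo_shrink,
		.seeks = DEFAULT_SEEKS,
	};

	static int __init demo_init(void)
	{
		register_oom_notifier(&demo_oom_nb);
		register_shrinker(&demo_shrinker);
		return 0;
	}

	static void __exit demo_exit(void)
	{
		unregister_shrinker(&demo_shrinker);
		unregister_oom_notifier(&demo_oom_nb);
	}

	module_init(demo_init);
	module_exit(demo_exit);
	MODULE_LICENSE("GPL");

The notifier only fires once the OOM killer has already been entered,
whereas a shrinker is consulted throughout reclaim, which is presumably
why a shrinker (perhaps extended to carry the scanning priority) is
being floated as the better fit.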