On Wed, 10 Aug 2011, Mahmood Naderan wrote:
> >If you're using cpusets or mempolicies, you must ensure that all tasks
> >attached to either of them are not set to OOM_DISABLE. It seems unlikely
> >that you're using those, so it seems like a system-wide oom condition.
>
> I didn't do that manually. What is the default behaviour? Does the oom
> killer work or not?
>

The default behavior is to kill an eligible thread; the kernel only panics
when there is nothing left to sacrifice, i.e. when every remaining thread
is either a kthread or set to OOM_DISABLE.

> For a user process:
>
> root@srv:~# cat /proc/18564/oom_score
> 9198
> root@srv:~# cat /proc/18564/oom_adj
> 0
>

Ok, so you don't have a /proc/pid/oom_score_adj, which means you're using a
kernel that predates 2.6.36.

> And for the "init" process:
>
> root@srv:~# cat /proc/1/oom_score
> 17509
> root@srv:~# cat /proc/1/oom_adj
> 0
>
> Based on my understanding, in an out-of-memory (oom) condition, the init
> process is more eligible to be killed! Is that right?
>

init is exempt from oom killing; its oom_score is meaningless.

> Again, I haven't gotten an answer to my question:
> What is the default behavior of Linux in an oom condition? If the default
> is to crash (kernel panic), how can I change that so that the hungry
> process is killed instead?
>

You either have /proc/sys/vm/panic_on_oom set, or the oom killer is killing
a thread that takes the entire machine down with it. If it's the latter,
then please capture the kernel log and post it as Randy suggested.
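
As a minimal sketch of the checks described above, assuming the pre-2.6.36
kernel established earlier (pid 18564 is just the example process from this
thread; substitute your own), something like the following should work:

# 0 means the oom killer picks a victim instead of panicking:
root@srv:~# cat /proc/sys/vm/panic_on_oom
root@srv:~# sysctl -w vm.panic_on_oom=0

# On pre-2.6.36 kernels oom_adj ranges from -17 (OOM_DISABLE) to +15;
# a higher value makes the process a more likely victim:
root@srv:~# echo 15 > /proc/18564/oom_adj

# Capture the oom killer's report from the kernel log for posting:
root@srv:~# dmesg | grep -i -B 5 -A 20 'out of memory'

Note that on those kernels oom_adj acts as a shift on the badness score
rather than selecting a victim outright; the kernel still kills the task
with the highest resulting oom_score.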