Re: [patch 0/7] improve memcg oom killer robustness v2

On Tue 17-09-13 13:15:35, azurIt wrote:
[...]
> Is something unusual on this stack?
> 
> 
> [<ffffffff810d1a5e>] dump_header+0x7e/0x1e0
> [<ffffffff810d195f>] ? find_lock_task_mm+0x2f/0x70
> [<ffffffff810d1f25>] oom_kill_process+0x85/0x2a0
> [<ffffffff810d24a8>] mem_cgroup_out_of_memory+0xa8/0xf0
> [<ffffffff8110fb76>] mem_cgroup_oom_synchronize+0x2e6/0x310
> [<ffffffff8110efc0>] ? mem_cgroup_uncharge_page+0x40/0x40
> [<ffffffff810d2703>] pagefault_out_of_memory+0x13/0x130
> [<ffffffff81026f6e>] mm_fault_error+0x9e/0x150
> [<ffffffff81027424>] do_page_fault+0x404/0x490
> [<ffffffff810f952c>] ? do_mmap_pgoff+0x3dc/0x430
> [<ffffffff815cb87f>] page_fault+0x1f/0x30

This is the regular memcg OOM killer path, which dumps messages about what
it is going to do. So no, nothing unusual here, unless the task stays on
this stack forever, which would mean that oom_kill_process is stuck in an
endless loop. But a single stack trace doesn't tell us much.

Just a note: when you see something hogging a CPU and you are not sure
whether it might be stuck in an endless loop inside the kernel, it makes
sense to take several snapshots of the stack trace and see whether it
changes. If it does not, and the process is not sleeping (there is no
schedule on the trace), then it is probably looping somewhere, waiting for
Godot. If it is sleeping, it is slightly harder, because you have to
identify what it is waiting for, which requires knowing a deeper context.
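For the record, this is roughly what I mean by taking several snapshots. A
minimal sketch, assuming the kernel exposes /proc/<pid>/stack (it needs
CONFIG_STACKTRACE) and you have permission to read it, with the pid given
on the command line and the interval adjusted to taste:

#!/usr/bin/env python3
# Take several snapshots of a task's kernel stack and report whether it
# changes between them.
# Assumptions: /proc/<pid>/stack exists (CONFIG_STACKTRACE) and we may read
# it (usually needs root).
import sys
import time

pid = sys.argv[1]
snapshots = []
for _ in range(5):
    with open(f"/proc/{pid}/stack") as f:
        snapshots.append(f.read())
    time.sleep(1)

if all(s == snapshots[0] for s in snapshots):
    print("stack did not change; check the task state in /proc/<pid>/status"
          " to see whether it is sleeping or looping in the kernel")
else:
    print("stack changed between snapshots; the task is making progress")
print(snapshots[-1])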
-- 
Michal Hocko
SUSE Labs
