Re: [PATCH] mm,page_alloc: Serialize warn_alloc() if schedulable.

Tetsuo Handa wrote:
> I'm failing to test your "mm, oom: fix oom_reaper fallouts" patches using
> http://lkml.kernel.org/r/201708072228.FAJ09347.tOOVOFFQJSHMFL@xxxxxxxxxxxxxxxxxxx
> because it fails to invoke the OOM killer for some unknown reason. I analyzed it using
> kmallocwd and confirmed that two dozen concurrently allocating threads are
> sufficient to hit this warn_alloc() vs. printk() lockup.
> Since printk offloading is not yet available, serialization is the only way we
> can mitigate this problem for now. How much longer do we have to wait?

The explanation above is not accurate, so let me restate it.

As part of testing your "mm, oom: fix oom_reaper fallouts" patches, I am trying
various stress patterns, including varying the number of threads. For some
unknown reason (not caused by your patches), it sometimes takes far too long
(on the order of minutes) to invoke the OOM killer, regardless of whether
warn_alloc() is printed periodically or not printed at all. To find out which
stage takes so long, I am using kmallocwd with a 1 second timeout; SysRq-t and
the like are useless for tracking how long threads wait at a specific location,
because they carry no in-flight allocation information, take many seconds to
complete, and produce too much noise (including idle threads that are simply
sleeping). Because of kmallocwd's 1 second timeout (which I want in order to
find out at which stage allocating threads are waiting for so long), the
printk() issued from out_of_memory() with oom_lock held got trapped behind
kmallocwd's periodic printk(), and that caused the lockup.

So this is not a direct warn_alloc() vs. printk() lockup, but what kmallocwd
tries to warn about is similar to what warn_alloc() reports. I cannot automate
testing of the "mm, oom: fix oom_reaper fallouts" patches because I sometimes
need SysRq-f and the like to unstick the test.
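
For context, the idea in the Subject boils down to something like the sketch
below. This is my illustration of "serialize warn_alloc() if schedulable", not
the actual patch; details such as the lock name and its placement are
assumptions. Contexts that may sleep queue on a mutex so that only one of them
prints a stall report at a time, while atomic contexts simply skip the report
instead of piling onto printk().

----------
static DEFINE_MUTEX(warn_alloc_lock);

void warn_alloc(gfp_t gfp_mask, nodemask_t *nodemask, const char *fmt, ...)
{
	if (gfpflags_allow_blocking(gfp_mask))
		mutex_lock(&warn_alloc_lock);	/* schedulable: wait our turn */
	else if (!mutex_trylock(&warn_alloc_lock))
		return;				/* atomic: drop this report */

	/* ... the existing stall report (pr_warn()/dump_stack()/show_mem()) ... */

	mutex_unlock(&warn_alloc_lock);
}
----------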

> 
> ----------
> [  645.993827] MemAlloc-Info: stalling=18 dying=0 exiting=0 victim=0 oom_count=29
> (...snipped...)
> [  645.996694] MemAlloc: vmtoolsd(2221) flags=0x400100 switches=5607 seq=3740 gfp=0x14200ca(GFP_HIGHUSER_MOVABLE) order=0 delay=7541
> [  645.996695] vmtoolsd        R  running task    11960  2221      1 0x00000080
> [  645.996699] Call Trace:
> [  645.996708]  ? console_unlock+0x373/0x4a0
> [  645.996709]  ? vprintk_emit+0x211/0x2f0
> [  645.996714]  ? vprintk_emit+0x21a/0x2f0
> [  645.996720]  ? vprintk_default+0x1a/0x20
> [  645.996722]  ? vprintk_func+0x22/0x60
> [  645.996724]  ? printk+0x53/0x6a
> [  645.996731]  ? dump_stack_print_info+0xab/0xb0
> [  645.996736]  ? dump_stack+0x5e/0x9e
> [  645.996739]  ? dump_header+0x9d/0x3fa
> [  645.996744]  ? trace_hardirqs_on+0xd/0x10
> [  645.996751]  ? oom_kill_process+0x226/0x650
> [  645.996757]  ? out_of_memory+0x13d/0x570
> [  645.996758]  ? out_of_memory+0x20d/0x570
> [  645.996763]  ? __alloc_pages_nodemask+0xbc8/0xed0
> [  645.996780]  ? alloc_pages_current+0x65/0xb0
> [  645.996784]  ? __page_cache_alloc+0x10b/0x140
> [  645.996789]  ? filemap_fault+0x3df/0x6a0
> [  645.996790]  ? filemap_fault+0x2ab/0x6a0
> [  645.996797]  ? xfs_filemap_fault+0x34/0x50
> [  645.996799]  ? __do_fault+0x19/0x120
> [  645.996803]  ? __handle_mm_fault+0xa99/0x1260
> [  645.996814]  ? handle_mm_fault+0x1b2/0x350
> [  645.996816]  ? handle_mm_fault+0x46/0x350
> [  645.996820]  ? __do_page_fault+0x1da/0x510
> [  645.996828]  ? do_page_fault+0x21/0x70
> [  645.996832]  ? page_fault+0x22/0x30
> (...snipped...)
> [  645.998748] MemAlloc-Info: stalling=18 dying=0 exiting=0 victim=0 oom_count=29
> (...snipped...)
> [ 1472.484590] MemAlloc-Info: stalling=25 dying=0 exiting=0 victim=0 oom_count=29
> (...snipped...)
> [ 1472.487341] MemAlloc: vmtoolsd(2221) flags=0x400100 switches=5607 seq=3740 gfp=0x14200ca(GFP_HIGHUSER_MOVABLE) order=0 delay=834032
> [ 1472.487342] vmtoolsd        R  running task    11960  2221      1 0x00000080
> [ 1472.487346] Call Trace:
> [ 1472.487353]  ? console_unlock+0x373/0x4a0
> [ 1472.487355]  ? vprintk_emit+0x211/0x2f0
> [ 1472.487360]  ? vprintk_emit+0x21a/0x2f0
> [ 1472.487367]  ? vprintk_default+0x1a/0x20
> [ 1472.487369]  ? vprintk_func+0x22/0x60
> [ 1472.487370]  ? printk+0x53/0x6a
> [ 1472.487377]  ? dump_stack_print_info+0xab/0xb0
> [ 1472.487381]  ? dump_stack+0x5e/0x9e
> [ 1472.487384]  ? dump_header+0x9d/0x3fa
> [ 1472.487389]  ? trace_hardirqs_on+0xd/0x10
> [ 1472.487396]  ? oom_kill_process+0x226/0x650
> [ 1472.487402]  ? out_of_memory+0x13d/0x570
> [ 1472.487403]  ? out_of_memory+0x20d/0x570
> [ 1472.487408]  ? __alloc_pages_nodemask+0xbc8/0xed0
> [ 1472.487426]  ? alloc_pages_current+0x65/0xb0
> [ 1472.487429]  ? __page_cache_alloc+0x10b/0x140
> [ 1472.487434]  ? filemap_fault+0x3df/0x6a0
> [ 1472.487435]  ? filemap_fault+0x2ab/0x6a0
> [ 1472.487441]  ? xfs_filemap_fault+0x34/0x50
> [ 1472.487444]  ? __do_fault+0x19/0x120
> [ 1472.487447]  ? __handle_mm_fault+0xa99/0x1260
> [ 1472.487459]  ? handle_mm_fault+0x1b2/0x350
> [ 1472.487460]  ? handle_mm_fault+0x46/0x350
> [ 1472.487465]  ? __do_page_fault+0x1da/0x510
> [ 1472.487472]  ? do_page_fault+0x21/0x70
> [ 1472.487476]  ? page_fault+0x22/0x30
> (...snipped...)
> [ 1472.489975] MemAlloc-Info: stalling=25 dying=0 exiting=0 victim=0 oom_count=29
> ----------
> 
