Re: [patch 0/7] improve memcg oom killer robustness v2

> CC: "Johannes Weiner" <hannes@xxxxxxxxxxx>, "Andrew Morton" <akpm@xxxxxxxxxxxxxxxxxxxx>, "David Rientjes" <rientjes@xxxxxxxxxx>, "KAMEZAWA Hiroyuki" <kamezawa.hiroyu@xxxxxxxxxxxxxx>, "KOSAKI Motohiro" <kosaki.motohiro@xxxxxxxxxxxxxx>, linux-mm@xxxxxxxxx, cgroups@xxxxxxxxxxxxxxx, x86@xxxxxxxxxx, linux-arch@xxxxxxxxxxxxxxx, linux-kernel@xxxxxxxxxxxxxxx
>On Wed 18-09-13 16:03:04, azurIt wrote:
>[..]
>> I was finally able to get the stack of the problematic process :) I
>> saved it twice from the same process, as Michal suggested (I wasn't
>> able to take more). Here it is:
>> 
>> First (doesn't look very helpful):
>> [<ffffffffffffffff>] 0xffffffffffffffff
>
>No it is not.
> 
>> Second:
>> [<ffffffff810e17d1>] shrink_zone+0x481/0x650
>> [<ffffffff810e2ade>] do_try_to_free_pages+0xde/0x550
>> [<ffffffff810e310b>] try_to_free_pages+0x9b/0x120
>> [<ffffffff81148ccd>] free_more_memory+0x5d/0x60
>> [<ffffffff8114931d>] __getblk+0x14d/0x2c0
>> [<ffffffff8114c973>] __bread+0x13/0xc0
>> [<ffffffff811968a8>] ext3_get_branch+0x98/0x140
>> [<ffffffff81197497>] ext3_get_blocks_handle+0xd7/0xdc0
>> [<ffffffff81198244>] ext3_get_block+0xc4/0x120
>> [<ffffffff81155b8a>] do_mpage_readpage+0x38a/0x690
>> [<ffffffff81155ffb>] mpage_readpages+0xfb/0x160
>> [<ffffffff811972bd>] ext3_readpages+0x1d/0x20
>> [<ffffffff810d9345>] __do_page_cache_readahead+0x1c5/0x270
>> [<ffffffff810d9411>] ra_submit+0x21/0x30
>> [<ffffffff810cfb90>] filemap_fault+0x380/0x4f0
>> [<ffffffff810ef908>] __do_fault+0x78/0x5a0
>> [<ffffffff810f2b24>] handle_pte_fault+0x84/0x940
>> [<ffffffff810f354a>] handle_mm_fault+0x16a/0x320
>> [<ffffffff8102715b>] do_page_fault+0x13b/0x490
>> [<ffffffff815cb87f>] page_fault+0x1f/0x30
>> [<ffffffffffffffff>] 0xffffffffffffffff
>
>This is the direct reclaim path. You are simply running out of memory
>globally. There is no memcg-specific code in that trace.
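
(Side note on that trace: the free_more_memory() frame is the buffer layer's
fallback in fs/buffer.c, used when __getblk() cannot get memory for a buffer
page. In kernels of that era it looked roughly like the sketch below -- this
is simplified and from memory, not the exact source -- and it only runs
global direct reclaim; nothing on this path consults a memcg:)

    /* fs/buffer.c (approximate, ~3.x era) -- simplified sketch, not verbatim.
     * Called when __getblk() fails to allocate memory for a block's page. */
    static void free_more_memory(void)
    {
            struct zone *zone;
            int nid;

            /* Ask the flusher threads to write back some dirty pages. */
            wakeup_flusher_threads(1024);
            yield();

            /* Then run plain global direct reclaim on every online node;
             * no memory cgroup is consulted anywhere on this path. */
            for_each_online_node(nid) {
                    (void)first_zones_zonelist(node_zonelist(nid, GFP_NOFS),
                                               gfp_zone(GFP_NOFS), NULL, &zone);
                    if (zone)
                            try_to_free_pages(node_zonelist(nid, GFP_NOFS), 0,
                                              GFP_NOFS, NULL);
            }
    }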


No, I'm not. Here are htop and the server graphs from this incident:
http://watchdog.sk/lkml/htop3.jpg (here you can see the actual memory usage)
http://watchdog.sk/lkml/server01.jpg

If I really were hitting a global OOM (which I'm 101% sure I'm not), where would that I/O be coming from? I have no swap.




