On Mon, 25 April 2011 Pekka Enberg <penberg@xxxxxxxxxx> wrote:
> On Mon, Apr 25, 2011 at 12:17 PM, Bruno Prémont
> <bonbons@xxxxxxxxxxxxxxxxx> wrote:
> > On Mon, 25 April 2011 Mike Frysinger wrote:
> >> On Sun, Apr 24, 2011 at 22:42, KOSAKI Motohiro wrote:
> >> >> On Sun, 24 April 2011 Bruno Prémont wrote:
> >> >> > On an older system I've been running Gentoo's revdep-rebuild to check
> >> >> > for system linking/*.la consistency, and after doing most of the work
> >> >> > the system more or less starved, just complaining about stuck tasks
> >> >> > now and then.
> >> >> > The memory usage graph as seen from userspace showed a sudden, quick
> >> >> > increase in memory usage, though only a few MB were swapped out
> >> >> > (cf. attached RRD graph).
> >> >>
> >> >> Seems I've hit it once again (though detected before the system was
> >> >> fully stalled by trying to reclaim memory without success).
> >> >>
> >> >> This time it was during simple compiling...
> >> >> Gathered info below:
> >> >>
> >> >> /proc/meminfo:
> >> >> MemTotal:         480660 kB
> >> >> MemFree:           64948 kB
> >> >> Buffers:           10304 kB
> >> >> Cached:             6924 kB
> >> >> SwapCached:         4220 kB
> >> >> Active:            11100 kB
> >> >> Inactive:          15732 kB
> >> >> Active(anon):       4732 kB
> >> >> Inactive(anon):     4876 kB
> >> >> Active(file):       6368 kB
> >> >> Inactive(file):    10856 kB
> >> >> Unevictable:          32 kB
> >> >> Mlocked:              32 kB
> >> >> SwapTotal:        524284 kB
> >> >> SwapFree:         456432 kB
> >> >> Dirty:                80 kB
> >> >> Writeback:             0 kB
> >> >> AnonPages:          6268 kB
> >> >> Mapped:             2604 kB
> >> >> Shmem:                 4 kB
> >> >> Slab:             250632 kB
> >> >> SReclaimable:      51144 kB
> >> >> SUnreclaim:       199488 kB  <--- looks big as well...
> >> >> KernelStack:      131032 kB  <--- what???
> >> >
> >> > KernelStack is 8 kB per thread, so 131032 kB would mean roughly
> >> > 16000 threads (131032 / 8 ~= 16400), but your ps only showed about
> >> > 80 processes. Hmm... stack leak?
> >>
> >> I might have a similar report for 2.6.39-rc4 (it seems to work fine
> >> in 2.6.38.4), but for embedded Blackfin systems running gdbserver
> >> processes over and over (so lots of short-lived forks).
> >>
> >> I wonder if you have a lot of zombies or otherwise unclaimed
> >> resources? Does `ps aux` show anything unusual?
> >
> > I've not seen anything special (no big number of threads behind my
> > roughly 80 processes, and even after the kernel oom-killed nearly all
> > processes the hogged memory was not freed; no, there are no zombies
> > around either).
> >
> > Here it seems to happen when I run 2 intensive tasks in parallel, e.g.
> > (re)emerging gimp and running revdep-rebuild -pi in another terminal.
> > This produces a fork rate of about 100-300 per second.
> >
> > Suddenly kmalloc-128 slabs stop being freed and things degrade.
> >
> > Trying to trace some of the kmalloc-128 slab allocations I end up
> > seeing lots of allocations like this:
> >
> > [ 1338.554429] TRACE kmalloc-128 alloc 0xc294ff00 inuse=30 fp=0xc294ff00
> > [ 1338.554434] Pid: 1573, comm: collectd Tainted: G        W   2.6.39-rc4-jupiter-00187-g686c4cb #1
> > [ 1338.554437] Call Trace:
> > [ 1338.554442]  [<c10aef47>] trace+0x57/0xa0
> > [ 1338.554447]  [<c10b07b3>] alloc_debug_processing+0xf3/0x140
> > [ 1338.554452]  [<c10b0972>] T.999+0x172/0x1a0
> > [ 1338.554455]  [<c10b95d8>] ? get_empty_filp+0x58/0xc0
> > [ 1338.554459]  [<c10b95d8>] ? get_empty_filp+0x58/0xc0
> > [ 1338.554464]  [<c10b0a52>] kmem_cache_alloc+0xb2/0x100
> > [ 1338.554468]  [<c10c08b5>] ? path_put+0x15/0x20
> > [ 1338.554472]  [<c10b95d8>] ? get_empty_filp+0x58/0xc0
> > [ 1338.554476]  [<c10b95d8>] get_empty_filp+0x58/0xc0
> > [ 1338.554481]  [<c10c323f>] path_openat+0x1f/0x320
> > [ 1338.554485]  [<c10a0a4e>] ? __access_remote_vm+0x19e/0x1d0
> > [ 1338.554490]  [<c10c3620>] do_filp_open+0x30/0x80
> > [ 1338.554495]  [<c10b0a30>] ? kmem_cache_alloc+0x90/0x100
> > [ 1338.554500]  [<c10c16f8>] ? getname_flags+0x28/0xe0
> > [ 1338.554505]  [<c10cd522>] ? alloc_fd+0x62/0xe0
> > [ 1338.554509]  [<c10c1731>] ? getname_flags+0x61/0xe0
> > [ 1338.554514]  [<c10b781d>] do_sys_open+0xed/0x1e0
> > [ 1338.554519]  [<c10b7979>] sys_open+0x29/0x40
> > [ 1338.554524]  [<c1391390>] sysenter_do_call+0x12/0x26
> > [ 1338.556764] TRACE kmalloc-128 alloc 0xc294ff80 inuse=31 fp=0xc294ff80
> > [ 1338.556774] Pid: 1332, comm: bash Tainted: G        W   2.6.39-rc4-jupiter-00187-g686c4cb #1
> > [ 1338.556779] Call Trace:
> > [ 1338.556794]  [<c10aef47>] trace+0x57/0xa0
> > [ 1338.556802]  [<c10b07b3>] alloc_debug_processing+0xf3/0x140
> > [ 1338.556807]  [<c10b0972>] T.999+0x172/0x1a0
> > [ 1338.556812]  [<c10b95d8>] ? get_empty_filp+0x58/0xc0
> > [ 1338.556817]  [<c10b95d8>] ? get_empty_filp+0x58/0xc0
> > [ 1338.556821]  [<c10b0a52>] kmem_cache_alloc+0xb2/0x100
> > [ 1338.556826]  [<c10b95d8>] ? get_empty_filp+0x58/0xc0
> > [ 1338.556830]  [<c10b95d8>] get_empty_filp+0x58/0xc0
> > [ 1338.556841]  [<c121fca8>] ? tty_ldisc_deref+0x8/0x10
> > [ 1338.556849]  [<c10c323f>] path_openat+0x1f/0x320
> > [ 1338.556857]  [<c11e2b3e>] ? fbcon_cursor+0xfe/0x180
> > [ 1338.556863]  [<c10c3620>] do_filp_open+0x30/0x80
> > [ 1338.556868]  [<c10b0a30>] ? kmem_cache_alloc+0x90/0x100
> > [ 1338.556873]  [<c10c5e8e>] ? do_vfs_ioctl+0x7e/0x580
> > [ 1338.556878]  [<c10c16f8>] ? getname_flags+0x28/0xe0
> > [ 1338.556886]  [<c10cd522>] ? alloc_fd+0x62/0xe0
> > [ 1338.556891]  [<c10c1731>] ? getname_flags+0x61/0xe0
> > [ 1338.556898]  [<c10b781d>] do_sys_open+0xed/0x1e0
> > [ 1338.556903]  [<c10b7979>] sys_open+0x29/0x40
> > [ 1338.556913]  [<c1391390>] sysenter_do_call+0x12/0x26
> >
> > Collectd is a system monitoring daemon that counts processes, tracks
> > memory usage and much more, reading lots of files under /proc every
> > 10 seconds.
> > Maybe it opens a process-related file at a racy moment and thus
> > prevents the 128-byte slabs and kernel stacks from being released?
> >
> > Replaying the scenario I'm at:
> > Slab:          43112 kB
> > SReclaimable:  25396 kB
> > SUnreclaim:    17716 kB
> > KernelStack:   16432 kB
> > PageTables:     1320 kB
> >
> > with
> > kmalloc-256      55     64    256   16    1 : tunables    0    0    0 : slabdata     4     4     0
> > kmalloc-128   66656  66656    128   32    1 : tunables    0    0    0 : slabdata  2083  2083     0
> > kmalloc-64     3902   3904     64   64    1 : tunables    0    0    0 : slabdata    61    61     0
> >
> > (The compiling process tree is now SIGSTOPped so that the system
> > doesn't starve immediately and I can look around for information.)
> >
> > If I resume one of the compiling process trees, both KernelStack and
> > slab (kmalloc-128) usage increase quite quickly (and seem to never go
> > down again) - probably at the same rate as processes get born (no
> > matter when they end).
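For reference, the "TRACE kmalloc-128 ..." dumps above come from SLUB's
per-cache trace knob, which logs every alloc/free of a cache together
with a backtrace. A minimal sketch of driving it, assuming
CONFIG_SLUB_DEBUG is enabled and the cache has not been merged with
another one (booting with slub_debug avoids merging):

  echo 1 > /sys/kernel/slab/kmalloc-128/trace   # dump a TRACE line + call trace per alloc/free to the kernel log
  ... reproduce the fork-heavy workload ...
  echo 0 > /sys/kernel/slab/kmalloc-128/trace   # stop tracing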
> Looks like it might be a leak in VFS. You could try kmemleak to narrow
> it down some more. See Documentation/kmemleak.txt for details.

Hm, the system seems not to be willing to let me run kmemleak...
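For reference, the kmemleak procedure I'm attempting is roughly the
following (a minimal sketch per Documentation/kmemleak.txt, assuming the
kernel is built with CONFIG_DEBUG_KMEMLEAK=y and debugfs is available):

  mount -t debugfs nodev /sys/kernel/debug/   # if not already mounted
  echo scan > /sys/kernel/debug/kmemleak      # trigger an immediate memory scan
  cat /sys/kernel/debug/kmemleak              # list suspected leaks with their allocation backtraces
  echo clear > /sys/kernel/debug/kmemleak     # drop the current suspects before re-testing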
Each time I start my load scenario I get "BUG: unable to handle kernel "
on the console as a last breath from the system (the rest of the trace
never shows up).

Going to try harder to get at least a complete trace...
Bruno

> Pekka
--
To unsubscribe from this list: send the line "unsubscribe linux-fsdevel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html