Re: Soft lockup problem

Hi everyone,
It finally did it again. It took longer than I expected, and this time it
locked itself up so badly that I couldn't get into it to hit
ctrl+alt+sysrq+w.
I had turned on the debugging feature that automatically logs hung
tasks, and I've attached the log below; I hope it's helpful.
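In case it helps anyone reproduce this, here is roughly how the hung-task
logging is set up on this box (a sketch from memory; the timeout value is
just the default I'm running with):

    # Kernel config, under "Kernel hacking":
    #   CONFIG_DETECT_HUNG_TASK=y    (produces the "blocked for more than
    #                                 120 seconds" traces in the log below)
    #   CONFIG_LOCKUP_DETECTOR=y     (hard/soft lockup detection)

    # Runtime knobs:
    echo 120 > /proc/sys/kernel/hung_task_timeout_secs  # report threshold; 0 disables the messages
    echo 0   > /proc/sys/kernel/hung_task_panic         # just log, don't panic

    # The same blocked-task dump can also be requested on demand:
    echo w > /proc/sysrq-trigger   # dump tasks stuck in D state to dmesg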

I was running 3.2.4 from kernel.org on a 4-core Xeon machine:
model name	: Intel(R) Xeon(R) CPU            5140  @ 2.33GHz

6 GB RAM

2x Intel 80003ES2LAN Gigabit Ethernet Controllers bonded together
2 LSI SAS controllers:
08:00.0 SCSI storage controller: LSI Logic / Symbios Logic SAS1068E
PCI-Express Fusion-MPT SAS (rev 08)
0a:00.0 RAID bus controller: LSI Logic / Symbios Logic MegaRAID SAS
2008 [Falcon] (rev 03)

16 drives in a mix of 2 TB and 3 TB, in 3 RAID 5 arrays combined with LVM:
/dev/mapper/pool-main   23T   12T   11T  52%
for a 23 TB volume formatted with XFS.
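For completeness, the stack is layered roughly like this (a command sketch,
not the exact commands I originally ran; device names like /dev/sd[b-f]1 and
/dev/md0..2 and the mount point are placeholders):

    # three md RAID 5 arrays (2 x 5 drives and 1 x 4 drives)
    mdadm --create /dev/md0 --level=5 --raid-devices=5 /dev/sd[b-f]1
    # ... likewise for /dev/md1 and /dev/md2

    # LVM pool spanning the three arrays
    pvcreate /dev/md0 /dev/md1 /dev/md2
    vgcreate pool /dev/md0 /dev/md1 /dev/md2
    lvcreate -l 100%FREE -n main pool

    # one big XFS filesystem on top
    mkfs.xfs /dev/mapper/pool-main
    mount /dev/mapper/pool-main /mnt/pool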

The root partition is ext4 on an older SATA drive. The reason I bring
this up is that when I hit (on a whim) ctrl+sysrq+J, which is supposed
to unfreeze frozen filesystems, the console started dumping lots of
messages about attempting to unfreeze /dev/sda3 [my root partition], so
maybe there's a problem with my sda drive.
But I get no I/O or other errors in my logs at all. I monitor all the
drives with smartd to head off drive failures before they happen, and
it seems to think sda is fine.
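(For reference, the keyboard sequence and the shell equivalents of what I
poked at; the smartctl calls are just the usual health queries smartd runs
for me anyway:)

    # ctrl+sysrq+j from the console, or equivalently:
    echo j > /proc/sysrq-trigger     # emergency thaw of frozen filesystems

    # sanity-checking sda by hand:
    smartctl -H /dev/sda             # overall SMART health self-assessment
    smartctl -l error /dev/sda       # SMART error log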

Hopefully the attached log helps.
I appreciate any input; also, please call me an idiot if I'm missing
something obvious.

-Gerard Saraber


On Tue, Feb 7, 2012 at 10:54 AM, Jan Kara <jack@xxxxxxx> wrote:
> On Tue 07-02-12 10:35:37, Gerard Saraber wrote:
>> On Mon, Feb 6, 2012 at 4:51 PM, Jan Kara <jack@xxxxxxx> wrote:
>> > On Mon 06-02-12 09:40:45, Gerard Saraber wrote:
>> >> Greetings everyone,
>> >> I've been having a bit of a problem since upgrading to the Linux 3.x
>> >> series. I have a machine that we're using as a NAS that runs various
>> >> rsync processes (mostly at night). Lately, after a day or two, I will
>> >> come in in the morning to a load average of 49 but the machine not
>> >> really doing anything; when I try to run 'dstat' the command just
>> >> hangs with no output at all. There are no errors in the logs, or even
>> >> anything that would vaguely point at something I could work with.
>> >> So, needing to get the machine back to work, I attempted to reboot it
>> >> with "shutdown -r now" on the console... it gives a nice message saying
>> >> it's going to reboot, but nothing ever happens. The only way to reboot
>> >> it is by using ctrl + alt + sysrq + b, after which the machine reboots
>> >> and the raid array comes back clean.
>> >>
>> >> I'm not sure how to troubleshoot this, any pointers would be appreciated.
>> >>
>> >> I'm compiling 3.2.4 at the moment and found a bunch of possibly useful
>> >> options in the kernel debugging section: detect hard/soft lockups and
>> >> detect hung tasks. Maybe they'll give me something more to go on.
>> >>
>> >> Some details about the machine:
>> >> Linux xenbox 3.2.2 #1 SMP Sun Jan 29 10:28:22 CST 2012 x86_64 Intel(R)
>> >> Xeon(R) CPU 5140 @ 2.33GHz GenuineIntel GNU/Linux
>> >> It has 3 software raid arrays (2 x 5 drives and 1 x 4 drives) LVM'ed
>> >> together into a 23TB XFS filesystem.
>> >> 6GB memory and a pair of Intel Gigabit ethernet controllers bonded together.
>> >  Hmm, might be some deadlock in the filesystem. Adding XFS guys to CC.
>> > Can you run 'echo w >/proc/sysrq-trigger' and post output of dmesg here?
>> >
>> >                                                                Honza
>> > --
>> > Jan Kara <jack@xxxxxxx>
>> > SUSE Labs, CR
>>
>> Thanks for the quick reply.
>> The machine is running well at the moment, so I'm not sure if the
>> output helps, but here it is:
>> [I'll also be sure to grab this log the next time it locks up]
>  Yeah. Sorry, I was not clear but I meant you should grab the traces when
> the machine locks up again...
>                                                                Honza
>
> --
> Jan Kara <jack@xxxxxxx>
> SUSE Labs, CR
Feb 26 09:41:19 [kernel] [1726920.709038] INFO: task kswapd0:590 blocked for more than 120 seconds.
Feb 26 09:41:19 [kernel] [1726920.709042] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Feb 26 09:41:19 [kernel] [1726920.709045] kswapd0         D 0000000000000000  1800   590      2 0x00000000
Feb 26 09:41:19 [kernel] [1726920.709052]  ffff8801b626b9b0 0000000000000046 00000000001d3300 0000000000000000
Feb 26 09:41:19 [kernel] [1726920.709058]  ffff8801b626b940 00000000001d2880 ffff8801b6262680 00000000001d2880
Feb 26 09:41:19 [kernel] [1726920.709064]  ffff8801b626bfd8 ffff8801b626a000 00000000001d2880 00000000001d2880
Feb 26 09:41:19 [kernel] [1726920.709069] Call Trace:
Feb 26 09:41:19 [kernel] [1726920.709079]  [<ffffffff81899095>] ? mutex_lock_nested+0x205/0x330
Feb 26 09:41:19 [kernel] [1726920.709083]  [<ffffffff8189841a>] schedule+0x3a/0x50
Feb 26 09:41:19 [kernel] [1726920.709087]  [<ffffffff81898fed>] mutex_lock_nested+0x15d/0x330
Feb 26 09:41:19 [kernel] [1726920.709092]  [<ffffffff8135308e>] ? xfs_reclaim_inodes_ag+0x2ee/0x3a0
Feb 26 09:41:19 [kernel] [1726920.709096]  [<ffffffff8135308e>] ? xfs_reclaim_inodes_ag+0x2ee/0x3a0
Feb 26 09:41:19 [kernel] [1726920.709100]  [<ffffffff8135308e>] xfs_reclaim_inodes_ag+0x2ee/0x3a0
Feb 26 09:41:19 [kernel] [1726920.709106]  [<ffffffff810c81ed>] ? lock_release_holdtime+0x3d/0x1a0
Feb 26 09:41:19 [kernel] [1726920.709111]  [<ffffffff810bb4b8>] ? sched_clock_cpu+0xa8/0x120
Feb 26 09:41:19 [kernel] [1726920.709115]  [<ffffffff810c7b3d>] ? trace_hardirqs_off+0xd/0x10
Feb 26 09:41:19 [kernel] [1726920.709119]  [<ffffffff810bb57f>] ? local_clock+0x4f/0x60
Feb 26 09:41:19 [kernel] [1726920.709122]  [<ffffffff810c81ed>] ? lock_release_holdtime+0x3d/0x1a0
Feb 26 09:41:19 [kernel] [1726920.709127]  [<ffffffff8139be30>] ? xfs_ail_push_all+0x80/0x90
Feb 26 09:41:19 [kernel] [1726920.709132]  [<ffffffff8189b6e6>] ? _raw_spin_unlock+0x26/0x30
Feb 26 09:41:19 [kernel] [1726920.709136]  [<ffffffff813532ce>] xfs_reclaim_inodes_nr+0x2e/0x40
Feb 26 09:41:19 [kernel] [1726920.709139]  [<ffffffff8134f840>] xfs_fs_free_cached_objects+0x10/0x20
Feb 26 09:41:19 [kernel] [1726920.709144]  [<ffffffff811881f1>] prune_super+0x101/0x1b0
Feb 26 09:41:19 [kernel] [1726920.709149]  [<ffffffff8113eb45>] shrink_slab+0x165/0x2d0
Feb 26 09:41:19 [kernel] [1726920.709153]  [<ffffffff81141fda>] kswapd+0x70a/0xa60
Feb 26 09:41:19 [kernel] [1726920.709158]  [<ffffffff810b4e30>] ? wake_up_bit+0x40/0x40
Feb 26 09:41:19 [kernel] [1726920.709161]  [<ffffffff811418d0>] ? try_to_free_pages+0x110/0x110
Feb 26 09:41:19 [kernel] [1726920.709165]  [<ffffffff810b48c6>] kthread+0xa6/0xb0
Feb 26 09:41:19 [kernel] [1726920.709173]  [<ffffffff8189b8dd>] ? retint_restore_args+0xe/0xe
Feb 26 09:41:19 [kernel] [1726920.709177]  [<ffffffff810b4820>] ? __init_kthread_worker+0x70/0x70
Feb 26 09:41:19 [kernel] [1726920.709180]  [<ffffffff8189e1b0>] ? gs_change+0xb/0xb
Feb 26 09:41:19 [kernel] [1726920.709183] 3 locks held by kswapd0/590:
Feb 26 09:41:19 [kernel] [1726920.709185]  #0:  (shrinker_rwsem){++++..}, at: [<ffffffff8113ea14>] shrink_slab+0x34/0x2d0
Feb 26 09:41:19 [kernel] [1726920.709192]  #1:  (&type->s_umount_key#31){.+.+.+}, at: [<ffffffff8118807f>] grab_super_passive+0x4f/0xc0
Feb 26 09:41:19 [kernel] [1726920.709199]  #2:  (&pag->pag_ici_reclaim_lock){+.+.-.}, at: [<ffffffff8135308e>] xfs_reclaim_inodes_ag+0x2ee/0x3a0
Feb 26 09:41:19 [kernel] [1726920.709211] INFO: task gkrellmd:2341 blocked for more than 120 seconds.
Feb 26 09:41:19 [kernel] [1726920.709214] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Feb 26 09:41:19 [kernel] [1726920.709216] gkrellmd        D 0000000000000000  1552  2341      1 0x00000000
Feb 26 09:41:19 [kernel] [1726920.709221]  ffff8801b4b5d6f8 0000000000000046 ffffffff8189813a 0000000000000000
Feb 26 09:41:19 [kernel] [1726920.709226]  ffff880100000000 00000000001d2880 ffff8801b5ac9340 00000000001d2880
Feb 26 09:41:19 [kernel] [1726920.709232]  ffff8801b4b5dfd8 ffff8801b4b5c000 00000000001d2880 00000000001d2880
Feb 26 09:41:19 [kernel] [1726920.709237] Call Trace:
Feb 26 09:41:19 [kernel] [1726920.709240]  [<ffffffff8189813a>] ? __schedule+0x70a/0x920
Feb 26 09:41:19 [kernel] [1726920.709244]  [<ffffffff81899095>] ? mutex_lock_nested+0x205/0x330
Feb 26 09:41:19 [kernel] [1726920.709247]  [<ffffffff8189841a>] schedule+0x3a/0x50
Feb 26 09:41:19 [kernel] [1726920.709251]  [<ffffffff81898fed>] mutex_lock_nested+0x15d/0x330
Feb 26 09:41:19 [kernel] [1726920.709254]  [<ffffffff8135308e>] ? xfs_reclaim_inodes_ag+0x2ee/0x3a0
Feb 26 09:41:19 [kernel] [1726920.709258]  [<ffffffff8135308e>] ? xfs_reclaim_inodes_ag+0x2ee/0x3a0
Feb 26 09:41:19 [kernel] [1726920.709262]  [<ffffffff8135308e>] xfs_reclaim_inodes_ag+0x2ee/0x3a0
Feb 26 09:41:19 [kernel] [1726920.709265]  [<ffffffff810c81ed>] ? lock_release_holdtime+0x3d/0x1a0
Feb 26 09:41:19 [kernel] [1726920.709269]  [<ffffffff810bb4b8>] ? sched_clock_cpu+0xa8/0x120
Feb 26 09:41:19 [kernel] [1726920.709273]  [<ffffffff810c7b3d>] ? trace_hardirqs_off+0xd/0x10
Feb 26 09:41:19 [kernel] [1726920.709304]  [<ffffffff810bb57f>] ? local_clock+0x4f/0x60
Feb 26 09:41:19 [kernel] [1726920.709312]  [<ffffffff8139be30>] ? xfs_ail_push_all+0x80/0x90
Feb 26 09:41:19 [kernel] [1726920.709316]  [<ffffffff8189b6e6>] ? _raw_spin_unlock+0x26/0x30
Feb 26 09:41:19 [kernel] [1726920.709319]  [<ffffffff813532ce>] xfs_reclaim_inodes_nr+0x2e/0x40
Feb 26 09:41:19 [kernel] [1726920.709323]  [<ffffffff8134f840>] xfs_fs_free_cached_objects+0x10/0x20
Feb 26 09:41:19 [kernel] [1726920.709327]  [<ffffffff811881f1>] prune_super+0x101/0x1b0
Feb 26 09:41:19 [kernel] [1726920.709330]  [<ffffffff8113eb45>] shrink_slab+0x165/0x2d0
Feb 26 09:41:19 [kernel] [1726920.709334]  [<ffffffff811414e7>] do_try_to_free_pages+0x267/0x460
Feb 26 09:41:19 [kernel] [1726920.709337]  [<ffffffff81141856>] try_to_free_pages+0x96/0x110
Feb 26 09:41:19 [kernel] [1726920.709342]  [<ffffffff81134ecd>] __alloc_pages_nodemask+0x4cd/0x820
Feb 26 09:41:19 [kernel] [1726920.709347]  [<ffffffff810bb4b8>] ? sched_clock_cpu+0xa8/0x120
Feb 26 09:41:19 [kernel] [1726920.709352]  [<ffffffff8116b141>] alloc_pages_current+0xa1/0x110
Feb 26 09:41:19 [kernel] [1726920.709356]  [<ffffffff81174335>] new_slab+0x265/0x300
Feb 26 09:41:19 [kernel] [1726920.709359]  [<ffffffff8189911c>] ? mutex_lock_nested+0x28c/0x330
Feb 26 09:41:19 [kernel] [1726920.709363]  [<ffffffff81176b5a>] __slab_alloc+0x2ca/0x540
Feb 26 09:41:19 [kernel] [1726920.709367]  [<ffffffff8118fb46>] ? getname_flags+0x36/0x270
Feb 26 09:41:19 [kernel] [1726920.709371]  [<ffffffff810bb57f>] ? local_clock+0x4f/0x60
Feb 26 09:41:19 [kernel] [1726920.709374]  [<ffffffff8118fb46>] ? getname_flags+0x36/0x270
Feb 26 09:41:19 [kernel] [1726920.709378]  [<ffffffff8117891b>] kmem_cache_alloc+0xdb/0x120
Feb 26 09:41:19 [kernel] [1726920.709381]  [<ffffffff8118fb46>] getname_flags+0x36/0x270
Feb 26 09:41:19 [kernel] [1726920.709384]  [<ffffffff8118fd8d>] getname+0xd/0x10
Feb 26 09:41:19 [kernel] [1726920.709388]  [<ffffffff81183fa8>] do_sys_open+0xc8/0x1d0
Feb 26 09:41:19 [kernel] [1726920.709391]  [<ffffffff811840cb>] sys_open+0x1b/0x20
Feb 26 09:41:19 [kernel] [1726920.709395]  [<ffffffff8189bffb>] system_call_fastpath+0x16/0x1b
Feb 26 09:41:19 [kernel] [1726920.709397] 3 locks held by gkrellmd/2341:
Feb 26 09:41:19 [kernel] [1726920.709399]  #0:  (shrinker_rwsem){++++..}, at: [<ffffffff8113ea14>] shrink_slab+0x34/0x2d0
Feb 26 09:41:19 [kernel] [1726920.709405]  #1:  (&type->s_umount_key#31){.+.+.+}, at: [<ffffffff8118807f>] grab_super_passive+0x4f/0xc0
Feb 26 09:41:19 [kernel] [1726920.709412]  #2:  (&pag->pag_ici_reclaim_lock){+.+.-.}, at: [<ffffffff8135308e>] xfs_reclaim_inodes_ag+0x2ee/0x3a0
Feb 26 09:41:19 [kernel] [1726920.709423] INFO: task nfsd:2821 blocked for more than 120 seconds.
Feb 26 09:41:19 [kernel] [1726920.709425] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Feb 26 09:41:19 [kernel] [1726920.709428] nfsd            D 0000000000000000  1160  2821      2 0x00000000
Feb 26 09:41:19 [kernel] [1726920.709432]  ffff8801ae9577a0 0000000000000046 ffffffff8189813a 0000000000000000
Feb 26 09:41:19 [kernel] [1726920.709438]  ffff880100000000 00000000001d2880 ffff8801b7ae4d00 00000000001d2880
Feb 26 09:41:19 [kernel] [1726920.709443]  ffff8801ae957fd8 ffff8801ae956000 00000000001d2880 00000000001d2880
Feb 26 09:41:19 [kernel] [1726920.709448] Call Trace:
Feb 26 09:41:19 [kernel] [1726920.709452]  [<ffffffff8189813a>] ? __schedule+0x70a/0x920
Feb 26 09:41:19 [kernel] [1726920.709455]  [<ffffffff81899095>] ? mutex_lock_nested+0x205/0x330
Feb 26 09:41:19 [kernel] [1726920.709459]  [<ffffffff8189841a>] schedule+0x3a/0x50
Feb 26 09:41:19 [kernel] [1726920.709462]  [<ffffffff81898fed>] mutex_lock_nested+0x15d/0x330
Feb 26 09:41:19 [kernel] [1726920.709466]  [<ffffffff8135308e>] ? xfs_reclaim_inodes_ag+0x2ee/0x3a0
Feb 26 09:41:19 [kernel] [1726920.709469]  [<ffffffff8135308e>] ? xfs_reclaim_inodes_ag+0x2ee/0x3a0
Feb 26 09:41:19 [kernel] [1726920.709473]  [<ffffffff8135308e>] xfs_reclaim_inodes_ag+0x2ee/0x3a0
Feb 26 09:41:19 [kernel] [1726920.709477]  [<ffffffff810c81ed>] ? lock_release_holdtime+0x3d/0x1a0
Feb 26 09:41:19 [kernel] [1726920.709481]  [<ffffffff810bb4b8>] ? sched_clock_cpu+0xa8/0x120
Feb 26 09:41:19 [kernel] [1726920.709484]  [<ffffffff810c7b3d>] ? trace_hardirqs_off+0xd/0x10
Feb 26 09:41:19 [kernel] [1726920.709488]  [<ffffffff810bb57f>] ? local_clock+0x4f/0x60
Feb 26 09:41:19 [kernel] [1726920.709491]  [<ffffffff810c81ed>] ? lock_release_holdtime+0x3d/0x1a0
Feb 26 09:41:19 [kernel] [1726920.709495]  [<ffffffff8139be30>] ? xfs_ail_push_all+0x80/0x90
Feb 26 09:41:19 [kernel] [1726920.709499]  [<ffffffff8189b6e6>] ? _raw_spin_unlock+0x26/0x30
Feb 26 09:41:19 [kernel] [1726920.709502]  [<ffffffff813532ce>] xfs_reclaim_inodes_nr+0x2e/0x40
Feb 26 09:41:19 [kernel] [1726920.709506]  [<ffffffff8134f840>] xfs_fs_free_cached_objects+0x10/0x20
Feb 26 09:41:19 [kernel] [1726920.709509]  [<ffffffff811881f1>] prune_super+0x101/0x1b0
Feb 26 09:41:19 [kernel] [1726920.709513]  [<ffffffff8113eb45>] shrink_slab+0x165/0x2d0
Feb 26 09:41:19 [kernel] [1726920.709517]  [<ffffffff811414e7>] do_try_to_free_pages+0x267/0x460
Feb 26 09:41:19 [kernel] [1726920.709520]  [<ffffffff81141856>] try_to_free_pages+0x96/0x110
Feb 26 09:41:19 [kernel] [1726920.709524]  [<ffffffff81134ecd>] __alloc_pages_nodemask+0x4cd/0x820
Feb 26 09:41:19 [kernel] [1726920.709534]  [<ffffffff8184b07c>] svc_recv+0xec/0x900
Feb 26 09:41:19 [kernel] [1726920.709539]  [<ffffffff81088560>] ? try_to_wake_up+0x2d0/0x2d0
Feb 26 09:41:19 [kernel] [1726920.709543]  [<ffffffff812d8cd0>] ? nfsd_shutdown+0x30/0x30
Feb 26 09:41:19 [kernel] [1726920.709547]  [<ffffffff812d8d6d>] nfsd+0x9d/0x150
Feb 26 09:41:19 [kernel] [1726920.709550]  [<ffffffff812d8cd0>] ? nfsd_shutdown+0x30/0x30
Feb 26 09:41:19 [kernel] [1726920.709553]  [<ffffffff810b48c6>] kthread+0xa6/0xb0
Feb 26 09:41:19 [kernel] [1726920.709557]  [<ffffffff8189e1b4>] kernel_thread_helper+0x4/0x10
Feb 26 09:41:19 [kernel] [1726920.709561]  [<ffffffff8189b8dd>] ? retint_restore_args+0xe/0xe
Feb 26 09:41:19 [kernel] [1726920.709565]  [<ffffffff810b4820>] ? __init_kthread_worker+0x70/0x70
Feb 26 09:41:19 [kernel] [1726920.709568]  [<ffffffff8189e1b0>] ? gs_change+0xb/0xb
Feb 26 09:41:19 [kernel] [1726920.709570] 3 locks held by nfsd/2821:
Feb 26 09:41:19 [kernel] [1726920.709572]  #0:  (shrinker_rwsem){++++..}, at: [<ffffffff8113ea14>] shrink_slab+0x34/0x2d0
Feb 26 09:41:19 [kernel] [1726920.709578]  #1:  (&type->s_umount_key#31){.+.+.+}, at: [<ffffffff8118807f>] grab_super_passive+0x4f/0xc0
Feb 26 09:41:19 [kernel] [1726920.709585]  #2:  (&pag->pag_ici_reclaim_lock){+.+.-.}, at: [<ffffffff8135308e>] xfs_reclaim_inodes_ag+0x2ee/0x3a0
Feb 26 09:41:19 [kernel] [1726920.709592] INFO: task nfsd:2822 blocked for more than 120 seconds.
Feb 26 09:41:19 [kernel] [1726920.709594] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Feb 26 09:41:19 [kernel] [1726920.709596] nfsd            D 0000000000000000  1224  2822      2 0x00000000
Feb 26 09:41:19 [kernel] [1726920.709601]  ffff8801aea517a0 0000000000000046 00000000001d3300 0000000000000002
Feb 26 09:41:19 [kernel] [1726920.709607]  ffff8801aea51730 00000000001d2880 ffff8801b5acb9c0 00000000001d2880
Feb 26 09:41:19 [kernel] [1726920.709612]  ffff8801aea51fd8 ffff8801aea50000 00000000001d2880 00000000001d2880
Feb 26 09:41:19 [kernel] [1726920.709617] Call Trace:
Feb 26 09:41:19 [kernel] [1726920.709621]  [<ffffffff81899095>] ? mutex_lock_nested+0x205/0x330
Feb 26 09:41:19 [kernel] [1726920.709624]  [<ffffffff8189841a>] schedule+0x3a/0x50
Feb 26 09:41:19 [kernel] [1726920.709627]  [<ffffffff81898fed>] mutex_lock_nested+0x15d/0x330
Feb 26 09:41:19 [kernel] [1726920.709631]  [<ffffffff8135308e>] ? xfs_reclaim_inodes_ag+0x2ee/0x3a0
Feb 26 09:41:19 [kernel] [1726920.709634]  [<ffffffff8135308e>] ? xfs_reclaim_inodes_ag+0x2ee/0x3a0
Feb 26 09:41:19 [kernel] [1726920.709638]  [<ffffffff8135308e>] xfs_reclaim_inodes_ag+0x2ee/0x3a0
Feb 26 09:41:19 [kernel] [1726920.709642]  [<ffffffff810c81ed>] ? lock_release_holdtime+0x3d/0x1a0
Feb 26 09:41:19 [kernel] [1726920.709646]  [<ffffffff810bb4b8>] ? sched_clock_cpu+0xa8/0x120
Feb 26 09:41:19 [kernel] [1726920.709649]  [<ffffffff810c7b3d>] ? trace_hardirqs_off+0xd/0x10
Feb 26 09:41:19 [kernel] [1726920.709653]  [<ffffffff810bb57f>] ? local_clock+0x4f/0x60
Feb 26 09:41:19 [kernel] [1726920.709656]  [<ffffffff810c81ed>] ? lock_release_holdtime+0x3d/0x1a0
Feb 26 09:41:19 [kernel] [1726920.709660]  [<ffffffff8139be30>] ? xfs_ail_push_all+0x80/0x90
Feb 26 09:41:19 [kernel] [1726920.709664]  [<ffffffff8189b6e6>] ? _raw_spin_unlock+0x26/0x30
Feb 26 09:41:19 [kernel] [1726920.709667]  [<ffffffff813532ce>] xfs_reclaim_inodes_nr+0x2e/0x40
Feb 26 09:41:19 [kernel] [1726920.709671]  [<ffffffff8134f840>] xfs_fs_free_cached_objects+0x10/0x20
Feb 26 09:41:19 [kernel] [1726920.709674]  [<ffffffff811881f1>] prune_super+0x101/0x1b0
Feb 26 09:41:19 [kernel] [1726920.709677]  [<ffffffff8113eb45>] shrink_slab+0x165/0x2d0
Feb 26 09:41:19 [kernel] [1726920.709681]  [<ffffffff811414e7>] do_try_to_free_pages+0x267/0x460
Feb 26 09:41:19 [kernel] [1726920.709685]  [<ffffffff81141856>] try_to_free_pages+0x96/0x110
Feb 26 09:41:19 [kernel] [1726920.709689]  [<ffffffff81134ecd>] __alloc_pages_nodemask+0x4cd/0x820
Feb 26 09:41:19 [kernel] [1726920.709694]  [<ffffffff8183e36d>] ? svc_tcp_recvfrom+0x50d/0x760
Feb 26 09:41:19 [kernel] [1726920.709698]  [<ffffffff8116b141>] alloc_pages_current+0xa1/0x110
Feb 26 09:41:19 [kernel] [1726920.709702]  [<ffffffff8184b07c>] svc_recv+0xec/0x900
Feb 26 09:41:19 [kernel] [1726920.709705]  [<ffffffff81088560>] ? try_to_wake_up+0x2d0/0x2d0
Feb 26 09:41:19 [kernel] [1726920.709708]  [<ffffffff812d8cd0>] ? nfsd_shutdown+0x30/0x30
Feb 26 09:41:19 [kernel] [1726920.709712]  [<ffffffff812d8d6d>] nfsd+0x9d/0x150
Feb 26 09:41:19 [kernel] [1726920.709715]  [<ffffffff812d8cd0>] ? nfsd_shutdown+0x30/0x30
Feb 26 09:41:19 [kernel] [1726920.709718]  [<ffffffff810b48c6>] kthread+0xa6/0xb0
Feb 26 09:41:19 [kernel] [1726920.709722]  [<ffffffff8189e1b4>] kernel_thread_helper+0x4/0x10
Feb 26 09:41:19 [kernel] [1726920.709726]  [<ffffffff8189b8dd>] ? retint_restore_args+0xe/0xe
Feb 26 09:41:19 [kernel] [1726920.709729]  [<ffffffff810b4820>] ? __init_kthread_worker+0x70/0x70
Feb 26 09:41:19 [kernel] [1726920.709732]  [<ffffffff8189e1b0>] ? gs_change+0xb/0xb
Feb 26 09:41:19 [kernel] [1726920.709735] 3 locks held by nfsd/2822:
Feb 26 09:41:19 [kernel] [1726920.709737]  #0:  (shrinker_rwsem){++++..}, at: [<ffffffff8113ea14>] shrink_slab+0x34/0x2d0
Feb 26 09:41:19 [kernel] [1726920.709757] INFO: task nfsd:2824 blocked for more than 120 seconds.
Feb 26 09:41:19 [kernel] [1726920.709759] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Feb 26 09:41:19 [kernel] [1726920.709761] nfsd            D 0000000000000000  1208  2824      2 0x00000000
Feb 26 09:41:19 [kernel] [1726920.709766]  ffff8801ae065d60 0000000000000046 00000000001d3300 0000000000000000
Feb 26 09:41:19 [kernel] [1726920.709771]  ffff8801ae065cf0 00000000001d2880 ffff8801b54aa680 00000000001d2880
Feb 26 09:41:19 [kernel] [1726920.709777]  ffff8801ae065fd8 ffff8801ae064000 00000000001d2880 00000000001d2880
Feb 26 09:41:19 [kernel] [1726920.709782] Call Trace:
Feb 26 09:41:19 [kernel] [1726920.709785]  [<ffffffff81899095>] ? mutex_lock_nested+0x205/0x330
Feb 26 09:41:19 [kernel] [1726920.709789]  [<ffffffff8189841a>] schedule+0x3a/0x50
Feb 26 09:41:19 [kernel] [1726920.709792]  [<ffffffff81898fed>] mutex_lock_nested+0x15d/0x330
Feb 26 09:41:19 [kernel] [1726920.709796]  [<ffffffff8184ad79>] ? svc_send+0x59/0xf0
Feb 26 09:41:19 [kernel] [1726920.709799]  [<ffffffff8184ad79>] ? svc_send+0x59/0xf0
Feb 26 09:41:19 [kernel] [1726920.709803]  [<ffffffff812d8cd0>] ? nfsd_shutdown+0x30/0x30
Feb 26 09:41:19 [kernel] [1726920.709806]  [<ffffffff8184ad79>] svc_send+0x59/0xf0
Feb 26 09:41:19 [kernel] [1726920.709809]  [<ffffffff81088560>] ? try_to_wake_up+0x2d0/0x2d0
Feb 26 09:41:19 [kernel] [1726920.709813]  [<ffffffff812d8cd0>] ? nfsd_shutdown+0x30/0x30
Feb 26 09:41:19 [kernel] [1726920.709816]  [<ffffffff8183cce0>] svc_process+0x120/0x150
Feb 26 09:41:19 [kernel] [1726920.709819]  [<ffffffff812d8d85>] nfsd+0xb5/0x150
Feb 26 09:41:19 [kernel] [1726920.709823]  [<ffffffff812d8cd0>] ? nfsd_shutdown+0x30/0x30
Feb 26 09:41:19 [kernel] [1726920.709826]  [<ffffffff810b48c6>] kthread+0xa6/0xb0
Feb 26 09:41:19 [kernel] [1726920.709830]  [<ffffffff8189e1b4>] kernel_thread_helper+0x4/0x10
Feb 26 09:41:19 [kernel] [1726920.709833]  [<ffffffff8189b8dd>] ? retint_restore_args+0xe/0xe
Feb 26 09:41:19 [kernel] [1726920.709837]  [<ffffffff810b4820>] ? __init_kthread_worker+0x70/0x70
Feb 26 09:41:19 [kernel] [1726920.709841]  [<ffffffff8189e1b0>] ? gs_change+0xb/0xb
Feb 26 09:41:19 [kernel] [1726920.709843] 1 lock held by nfsd/2824:

_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs
