On Mon, Apr 30, 2012 at 10:08:20AM +0200, Stefan Priebe wrote:
> Hi,
>
> while running several systems with 3.0.30 I'm seeing this call trace
> pretty often. Is this problem known?
>
> [925680.203973] Call Trace:
> [925680.204170]  [<c0610ce0>] schedule+0x30/0x50
> [925680.204221]  [<c0610d71>] io_schedule+0x71/0xb0
> [925680.204273]  [<c0362420>] get_request_wait+0xa0/0x150
> [925680.204376]  [<c03625c6>] __make_request+0xf6/0x280
> [925680.204427]  [<c036143d>] generic_make_request+0x3fd/0x610
> [925680.204741]  [<c03616ba>] submit_bio+0x6a/0x100
> [925680.204894]  [<c03366a7>] xfs_submit_ioend_bio+0x47/0x70
> [925680.204946]  [<c03367a4>] xfs_submit_ioend+0xd4/0xe0
> [925680.204998]  [<c0337ba3>] xfs_vm_writepage+0x253/0x580
> [925680.205109]  [<c01ae8ca>] shrink_page_list+0x55a/0x750
> [925680.205162]  [<c01aed60>] shrink_inactive_list+0x1a0/0x350
> [925680.205214]  [<c01af321>] shrink_zone+0x411/0x570
> [925680.205265]  [<c01af9fe>] kswapd+0x57e/0x9c0

kswapd is blocking while submitting IO - the request queue is full.
I'd say you've got memory pressure, lots of dirty pages, and an IO
subsystem that can't keep up....

Cheers,

Dave.
--
Dave Chinner
david@xxxxxxxxxxxxx

_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs
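[Editor's note: the diagnosis above (full request queue, dirty-page pressure) can be sanity-checked on a live box. A minimal sketch, assuming a Linux system with standard procfs/sysfs; no value here is from the original thread:]

```shell
# How much dirty/writeback data is outstanding right now?
grep -E '^(Dirty|Writeback):' /proc/meminfo

# How deep are the block request queues that get_request_wait() sleeps on?
# (128 requests is a common default; once a queue is full, submitters block.)
for q in /sys/block/*/queue/nr_requests; do
    [ -r "$q" ] && echo "$q = $(cat "$q")"
done

# Dirty-page writeback thresholds: lowering these makes the VM start
# flushing earlier, which can reduce the buildup described above.
cat /proc/sys/vm/dirty_ratio /proc/sys/vm/dirty_background_ratio
```

If the Dirty figure is large and nr_requests is saturated, common knobs to experiment with are raising nr_requests for the busy device or lowering vm.dirty_background_ratio, though neither fixes an IO subsystem that simply can't keep up.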