Christoph,

I wasn't able to reproduce this bug in 2.6.39 final.  There were several
large xfs commits between my test and 2.6.39 final, so I assume one of
them resolved the issue.  If it shows up again, I'll report it.

Cheers,
Erez.

On May 22, 2011, at 5:55 AM, Christoph Hellwig wrote:

> On Sat, May 21, 2011 at 01:07:41AM -0400, Erez Zadok wrote:
>> I ran racer on top of xfs in 2.6.39 and v2.6.39-2612-g0524975.  Very
>> quickly got a series of these lockdep warnings.
>
> There's really nothing XFS specific here, except that the old IDE driver
> which seems to have an issue is called by XFS in this case.
>
> What's rather confusing about the trace is that schedule() should
> never call into the block driver, but offload it to kblockd from
> how I read the code in 2.6.39 final.  It used to be different before
> but got fixed during the rcs.
>
>> Cheers,
>> Erez.
>>
>> [ 45.605295] BUG: sleeping function called from invalid context at drivers/ide/ide-io.c:468
>> [ 45.605429] in_atomic(): 1, irqs_disabled(): 0, pid: 2464, name: dir_create.sh
>> [ 45.605533] 1 lock held by dir_create.sh/2464:
>> [ 45.605606] #0: (&(&ip->i_lock)->mr_lock){++++..}, at: [<d20efd34>] xfs_ilock+0x4f/0x67 [xfs]
>> [ 45.605848] Pid: 2464, comm: dir_create.sh Not tainted 2.6.39-linus+ #390
>> [ 45.605946] Call Trace:
>> [ 45.606021] [<c10201eb>] __might_sleep+0xd5/0xdd
>> [ 45.606112] [<c1160cc2>] do_ide_request+0x3a/0x514
>> [ 45.606199] [<c111a880>] ? cfq_service_tree_add+0x1de/0x241
>> [ 45.606290] [<c111a54c>] ? cfq_prio_tree_add+0x80/0x90
>> [ 45.606376] [<c1110d19>] __blk_run_queue+0x14/0x16
>> [ 45.606458] [<c111bee8>] cfq_insert_request+0x403/0x40b
>> [ 45.606542] [<c111024e>] __elv_add_request+0x14c/0x17c
>> [ 45.606625] [<c111267e>] blk_flush_plug_list+0x130/0x17a
>> [ 45.606714] [<c11f7d0c>] schedule+0x22f/0x705
>> [ 45.606795] [<c104710c>] ? mark_held_locks+0x3d/0x58
>> [ 45.606879] [<c11fa2c5>] ? _raw_spin_unlock_irqrestore+0x36/0x59
>> [ 45.607041] [<c1047226>] ? trace_hardirqs_on_caller+0xff/0x120
>> [ 45.607195] [<c102039e>] ? get_parent_ip+0xb/0x31
>> [ 45.607337] [<c1022eef>] ? sub_preempt_count+0x74/0x8d
>> [ 45.607495] [<d20f8e6d>] _xfs_log_force_lsn+0x229/0x26c [xfs]
>> [ 45.607665] [<c1047252>] ? trace_hardirqs_on+0xb/0xd
>> [ 45.607809] [<c102195e>] ? try_to_wake_up+0x1db/0x1db
>> [ 45.607967] [<d2104b30>] _xfs_trans_commit+0x373/0x47c [xfs]
>> [ 45.608137] [<d20f5c40>] xfs_iomap_write_allocate+0x221/0x301 [xfs]
>> [ 45.608307] [<d210a272>] xfs_map_blocks+0x1b9/0x1cd [xfs]
>> [ 45.608460] [<c10497b4>] ? __lock_acquire+0x6ff/0x76a
>> [ 45.608627] [<d210b327>] xfs_vm_writepage+0x284/0x441 [xfs]
>> [ 45.608782] [<c10668b7>] __writepage+0xb/0x23
>> [ 45.608922] [<c1066c28>] write_cache_pages+0x1a9/0x271
>> [ 45.609068] [<c10668ac>] ? set_page_dirty+0x5a/0x5a
>> [ 45.609213] [<c1066d1e>] generic_writepages+0x2e/0x43
>> [ 45.609408] [<d210a4a1>] xfs_vm_writepages+0x3c/0x42 [xfs]
>> [ 45.640167] [<d210a465>] ? xfs_aops_discard_page+0x14a/0x14a [xfs]
>> [ 45.640331] [<c1066d4f>] do_writepages+0x1c/0x28
>> [ 45.640474] [<c1061a1a>] __filemap_fdatawrite_range+0x5a/0x66
>> [ 45.640629] [<c1062116>] filemap_fdatawrite_range+0x10/0x12
>> [ 45.640790] [<d210ec46>] xfs_flush_pages+0x5c/0x94 [xfs]
>> [ 45.640948] [<d210921c>] xfs_release+0x107/0x1cc [xfs]
>> [ 45.641103] [<d210defd>] xfs_file_release+0xd/0x11 [xfs]
>> [ 45.641257] [<c10858fe>] fput+0xee/0x193
>> [ 45.641452] [<c1082dcd>] filp_close+0x57/0x61
>> [ 45.641589] [<c1090a51>] sys_dup3+0xdd/0x101
>> [ 45.641722] [<c1090af5>] sys_dup2+0x80/0x8a
>> [ 45.641857] [<c11fadf0>] sysenter_do_call+0x12/0x36
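
For anyone curious what the kblockd offload Christoph mentions amounts to,
here is a minimal userspace sketch of the pattern: run the work inline only
when the caller is allowed to sleep, otherwise hand it to a dedicated worker
thread that runs it in its own (sleepable) context.  This is just an analogy
under assumed names; flush_plug, run_queue and the worker below are invented
for the sketch and are not the real block-layer interfaces.

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
static bool work_pending;
static bool shutting_down;

/* Stand-in for a request handler that may sleep (e.g. the old IDE driver). */
static void run_queue(void)
{
	printf("running the queue (sleeping would be legal here)\n");
}

/* "kblockd"-style worker: executes deferred queue runs in its own context. */
static void *worker(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&lock);
	while (!shutting_down) {
		while (!work_pending && !shutting_down)
			pthread_cond_wait(&cond, &lock);
		if (work_pending) {
			work_pending = false;
			pthread_mutex_unlock(&lock);
			run_queue();		/* safe: this thread may sleep */
			pthread_mutex_lock(&lock);
		}
	}
	pthread_mutex_unlock(&lock);
	return NULL;
}

/*
 * Flush path: run the queue inline only if the caller may sleep;
 * otherwise mark work pending and wake the worker instead.
 */
static void flush_plug(bool may_sleep)
{
	if (may_sleep) {
		run_queue();
		return;
	}
	pthread_mutex_lock(&lock);
	work_pending = true;
	pthread_cond_signal(&cond);
	pthread_mutex_unlock(&lock);
}

int main(void)
{
	pthread_t tid;

	pthread_create(&tid, NULL, worker, NULL);

	flush_plug(true);	/* ordinary context: run inline */
	flush_plug(false);	/* "atomic" caller: defer to the worker */

	sleep(1);		/* crude: let the worker drain the deferred run */

	pthread_mutex_lock(&lock);
	shutting_down = true;
	pthread_cond_signal(&cond);
	pthread_mutex_unlock(&lock);
	pthread_join(tid, NULL);
	return 0;
}

The trace above shows the opposite of this: blk_flush_plug_list() called from
schedule() ends up in __blk_run_queue() and then in do_ide_request(), which may
sleep, while the caller is still atomic, hence the might_sleep splat.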