I compiled the kernel with Ingo's CONFIG_PROVE_LOCKING and got the report
below at boot. Is it a problem?

Lock dependency validator: Copyright (c) 2006 Red Hat, Inc., Ingo Molnar
... MAX_LOCKDEP_SUBCLASSES:  8
... MAX_LOCK_DEPTH:          30
... MAX_LOCKDEP_KEYS:        2048
... CLASSHASH_SIZE:          1024
... MAX_LOCKDEP_ENTRIES:     8192
... MAX_LOCKDEP_CHAINS:      16384
... CHAINHASH_SIZE:          8192
 memory used by lock dependency info: 1648 kB
 per task-struct memory footprint: 1680 bytes
------------------------
| Locking API testsuite:
----------------------------------------------------------------------------
[removed]
-------------------------------------------------------
Good, all 218 testcases passed! |
---------------------------------

Further down:

md: running: <sdah1><sdag1>
raid1: raid set md3 active with 2 out of 2 mirrors
md: ... autorun DONE.
Filesystem "md1": Disabling barriers, not supported by the underlying device
XFS mounting filesystem md1
Ending clean XFS mount for filesystem: md1
VFS: Mounted root (xfs filesystem).
Freeing unused kernel memory: 284k freed
Warning: unable to open an initial console.
Filesystem "md1": Disabling barriers, not supported by the underlying device

=======================================================
[ INFO: possible circular locking dependency detected ]
2.6.22.16 #1
-------------------------------------------------------
mount/1558 is trying to acquire lock:
 (&(&ip->i_lock)->mr_lock/1){--..}, at: [<ffffffff80312805>] xfs_ilock+0x63/0x8d

but task is already holding lock:
 (&(&ip->i_lock)->mr_lock){----}, at: [<ffffffff80312805>] xfs_ilock+0x63/0x8d

which lock already depends on the new lock.

the existing dependency chain (in reverse order) is:

-> #1 (&(&ip->i_lock)->mr_lock){----}:
       [<ffffffff80249fa6>] __lock_acquire+0xa0f/0xb9f
       [<ffffffff8024a50d>] lock_acquire+0x48/0x63
       [<ffffffff80312805>] xfs_ilock+0x63/0x8d
       [<ffffffff8023c909>] down_write_nested+0x38/0x46
       [<ffffffff80312805>] xfs_ilock+0x63/0x8d
       [<ffffffff803132e8>] xfs_iget_core+0x3ef/0x705
       [<ffffffff803136a2>] xfs_iget+0xa4/0x14e
       [<ffffffff80328364>] xfs_trans_iget+0xb4/0x128
       [<ffffffff80316a57>] xfs_ialloc+0x9b/0x4b7
       [<ffffffff80249fc9>] __lock_acquire+0xa32/0xb9f
       [<ffffffff80328d87>] xfs_dir_ialloc+0x84/0x2cd
       [<ffffffff80312805>] xfs_ilock+0x63/0x8d
       [<ffffffff8023c909>] down_write_nested+0x38/0x46
       [<ffffffff8032e307>] xfs_create+0x331/0x65f
       [<ffffffff80308163>] xfs_dir2_leaf_lookup+0x1d/0x96
       [<ffffffff80338367>] xfs_vn_mknod+0x12f/0x1f2
       [<ffffffff8027fb0a>] vfs_create+0x6e/0x9e
       [<ffffffff80282af3>] open_namei+0x1f7/0x6a9
       [<ffffffff8021843d>] do_page_fault+0x438/0x78f
       [<ffffffff8027705a>] do_filp_open+0x1c/0x3d
       [<ffffffff8045bf56>] _spin_unlock+0x17/0x20
       [<ffffffff80276e3d>] get_unused_fd+0x11c/0x12a
       [<ffffffff802770bb>] do_sys_open+0x40/0x7b
       [<ffffffff802095be>] system_call+0x7e/0x83
       [<ffffffffffffffff>] 0xffffffffffffffff

-> #0 (&(&ip->i_lock)->mr_lock/1){--..}:
       [<ffffffff80248896>] print_circular_bug_header+0xcc/0xd3
       [<ffffffff80249ea2>] __lock_acquire+0x90b/0xb9f
       [<ffffffff8024a50d>] lock_acquire+0x48/0x63
       [<ffffffff80312805>] xfs_ilock+0x63/0x8d
       [<ffffffff8023c909>] down_write_nested+0x38/0x46
       [<ffffffff80312805>] xfs_ilock+0x63/0x8d
       [<ffffffff8032bd30>] xfs_lock_inodes+0x152/0x16d
       [<ffffffff8032e807>] xfs_link+0x1d2/0x3f7
       [<ffffffff80249f3f>] __lock_acquire+0x9a8/0xb9f
       [<ffffffff80337fe5>] xfs_vn_link+0x3c/0x91
       [<ffffffff80248f4a>] mark_held_locks+0x58/0x72
       [<ffffffff8045a9b7>] __mutex_lock_slowpath+0x250/0x266
       [<ffffffff80249119>] trace_hardirqs_on+0x115/0x139
       [<ffffffff8045a9c2>] __mutex_lock_slowpath+0x25b/0x266
       [<ffffffff8027f88b>] vfs_link+0xe8/0x124
       [<ffffffff802822d8>] sys_linkat+0xcd/0x129
       [<ffffffff8045baaf>] trace_hardirqs_on_thunk+0x35/0x37
       [<ffffffff80249119>] trace_hardirqs_on+0x115/0x139
       [<ffffffff8045baaf>] trace_hardirqs_on_thunk+0x35/0x37
       [<ffffffff802095be>] system_call+0x7e/0x83
       [<ffffffffffffffff>] 0xffffffffffffffff

other info that might help us debug this:

3 locks held by mount/1558:
 #0:  (&inode->i_mutex/1){--..}, at: [<ffffffff802800f5>] lookup_create+0x23/0x85
 #1:  (&inode->i_mutex){--..}, at: [<ffffffff8027f878>] vfs_link+0xd5/0x124
 #2:  (&(&ip->i_lock)->mr_lock){----}, at: [<ffffffff80312805>] xfs_ilock+0x63/0x8d

stack backtrace:

Call Trace:
 [<ffffffff80248612>] print_circular_bug_tail+0x69/0x72
 [<ffffffff80248896>] print_circular_bug_header+0xcc/0xd3
 [<ffffffff80249ea2>] __lock_acquire+0x90b/0xb9f
 [<ffffffff8024a50d>] lock_acquire+0x48/0x63
 [<ffffffff80312805>] xfs_ilock+0x63/0x8d
 [<ffffffff8023c909>] down_write_nested+0x38/0x46
 [<ffffffff80312805>] xfs_ilock+0x63/0x8d
 [<ffffffff8032bd30>] xfs_lock_inodes+0x152/0x16d
 [<ffffffff8032e807>] xfs_link+0x1d2/0x3f7
 [<ffffffff80249f3f>] __lock_acquire+0x9a8/0xb9f
 [<ffffffff80337fe5>] xfs_vn_link+0x3c/0x91
 [<ffffffff80248f4a>] mark_held_locks+0x58/0x72
 [<ffffffff8045a9b7>] __mutex_lock_slowpath+0x250/0x266
 [<ffffffff80249119>] trace_hardirqs_on+0x115/0x139
 [<ffffffff8045a9c2>] __mutex_lock_slowpath+0x25b/0x266
 [<ffffffff8027f88b>] vfs_link+0xe8/0x124
 [<ffffffff802822d8>] sys_linkat+0xcd/0x129
 [<ffffffff8045baaf>] trace_hardirqs_on_thunk+0x35/0x37
 [<ffffffff80249119>] trace_hardirqs_on+0x115/0x139
 [<ffffffff8045baaf>] trace_hardirqs_on_thunk+0x35/0x37
 [<ffffffff802095be>] system_call+0x7e/0x83
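In case it helps anyone reading the report: the trailing "/1" in
&(&ip->i_lock)->mr_lock/1 is a lockdep subclass. When code legitimately
holds two locks of the same class at once (here, the mr_lock of two
different XFS inodes), the inner acquisition has to go through a
*_nested() primitive with a distinct subclass so the validator can tell
the two apart. A minimal sketch of that annotation pattern follows; the
function name and the order-by-address rule are illustrative assumptions
of mine, not the actual xfs_lock_inodes() code:

#include <linux/rwsem.h>
#include <linux/lockdep.h>

/*
 * Hypothetical example: write-lock the same-class rw_semaphores of two
 * inodes.  A stable ordering rule (here: by address) keeps every caller
 * consistent, and the subclass on the second acquisition keeps lockdep
 * from treating it as self-recursion.
 */
static void lock_two_inodes(struct rw_semaphore *a, struct rw_semaphore *b)
{
	if (a > b) {
		struct rw_semaphore *tmp = a;

		a = b;
		b = tmp;
	}

	down_write(a);                                  /* subclass 0 */
	down_write_nested(b, SINGLE_DEPTH_NESTING);     /* subclass 1: the "/1" */
}

xfs_lock_inodes() uses the same idea, taking the inodes in a defined
order and bumping the subclass for each one. Lockdep tracks only classes
and subclasses, not the ordering rule itself, so it can still flag a
"mr_lock/1 before mr_lock" chain against a "mr_lock before mr_lock/1"
chain like the pair above even when the runtime ordering rules out an
actual deadlock; whether this particular report is a real inversion or
an annotation gap is something the XFS folks would have to confirm.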