Hello,

I have a Linux box where the block devices are layered in this hierarchy: dm-crypt over LVM2 over software RAID5 over LVM over hard disks. Ever since this hierarchy has been in use, there have been occasional crashes, whose probability at any given time seems to be proportional to the amount of I/O being done. Now one disk of the RAID5 has shown block errors, and I tried to copy large amounts of data to other block devices using pvmove, but without success: the machine crashes before the copying finishes.

I then upgraded to linux-2.6.20-rc1, which yields the following error messages when accessing the block device hierarchy:

<4>[ 37.732000] =============================================
<4>[ 37.732000] [ INFO: possible recursive locking detected ]
<4>[ 37.732000] 2.6.20-rc1-nopreempt #21
<4>[ 37.732000] ---------------------------------------------
<4>[ 37.732000] kcryptd/0/755 is trying to acquire lock:
<4>[ 37.732000]  (&md->io_lock){----}, at: [<c02e3bc8>] dm_request+0x18/0xf0
<4>[ 37.732000]
<4>[ 37.732000] but task is already holding lock:
<4>[ 37.732000]  (&md->io_lock){----}, at: [<c02e3bc8>] dm_request+0x18/0xf0
<4>[ 37.732000]
<4>[ 37.732000] other info that might help us debug this:
<4>[ 37.732000] 1 lock held by kcryptd/0/755:
<4>[ 37.732000]  #0: (&md->io_lock){----}, at: [<c02e3bc8>] dm_request+0x18/0xf0
<4>[ 37.732000]
<4>[ 37.732000] stack backtrace:
<4>[ 37.732000]  [<c01042ee>] dump_trace+0x1ce/0x200
<4>[ 37.732000]  [<c010433a>] show_trace_log_lvl+0x1a/0x30
<4>[ 37.732000]  [<c0104972>] show_trace+0x12/0x20
<4>[ 37.732000]  [<c0104a26>] dump_stack+0x16/0x20
<4>[ 37.732000]  [<c013a3e5>] __lock_acquire+0xb35/0xe00
<4>[ 37.732000]  [<c013a72e>] lock_acquire+0x7e/0xb0
<4>[ 37.732000]  [<c01352aa>] down_read+0x3a/0x50
<4>[ 37.732000]  [<c02e3bc8>] dm_request+0x18/0xf0
<4>[ 37.732000]  [<c0208aed>] generic_make_request+0x1cd/0x380
<4>[ 37.732000]  [<c02d1563>] make_request+0x553/0x5f0
<4>[ 37.732000]  [<c0208aed>] generic_make_request+0x1cd/0x380
<4>[ 37.732000]  [<c02e2bbf>] __map_bio+0xcf/0x130
<4>[ 37.732000]  [<c02e36b1>] __split_bio+0x171/0x430
<4>[ 37.732000]  [<c02e3c5a>] dm_request+0xaa/0xf0
<4>[ 37.732000]  [<c0208aed>] generic_make_request+0x1cd/0x380
<4>[ 37.732000]  [<c02e9cbf>] kcryptd_do_work+0x27f/0x3e0
<4>[ 37.732000]  [<c012e9c5>] run_workqueue+0xf5/0x1d0
<4>[ 37.732000]  [<c012f09a>] worker_thread+0x13a/0x160
<4>[ 37.732000]  [<c0131cf3>] kthread+0xa3/0xd0
<4>[ 37.732000]  [<c0103d0f>] kernel_thread_helper+0x7/0x18
<4>[ 37.732000] =======================
<3>[ 37.733000] BUG: workqueue leaked lock or atomic: kcryptd/0/0x00000000/755, in_atomic()=0, lockdep_depth(current)=1, previous_lockdep_depth=0.
<3>[ 37.734000] last function: kcryptd_do_work+0x0/0x3e0
<4>[ 37.734000] 1 lock held by kcryptd/0/755:
<4>[ 37.734000]  #0: (&md->io_lock){----}, at: [<c02e3bc8>] dm_request+0x18/0xf0
<4>[ 37.734000]  [<c01042ee>] dump_trace+0x1ce/0x200
<4>[ 37.734000]  [<c010433a>] show_trace_log_lvl+0x1a/0x30
<4>[ 37.734000]  [<c0104972>] show_trace+0x12/0x20
<4>[ 37.734000]  [<c0104a26>] dump_stack+0x16/0x20
<4>[ 37.734000]  [<c012ea65>] run_workqueue+0x195/0x1d0
<4>[ 37.734000]  [<c012f09a>] worker_thread+0x13a/0x160
<4>[ 37.734000]  [<c0131cf3>] kthread+0xa3/0xd0
<4>[ 37.735000]  [<c0103d0f>] kernel_thread_helper+0x7/0x18
<4>[ 37.735000] =======================

As you can see, dm_request() ends up calling itself, which leads to a double acquisition of the same semaphore, &md->io_lock, even though standard Linux kernel semaphores do not support recursive acquisition. The semaphore would consequently also be released twice, i.e. released too early.
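Whether the two acquisitions really hit the same rw_semaphore instance or two locks of the same lockdep class on two stacked dm devices, the nested pattern shown in the trace is the same: dm_request() takes io_lock for reading and, while still holding it, forwards the bio via generic_make_request(), which re-enters dm_request() for the dm device below. To make sure I read the trace correctly, I wrote a minimal userspace model of that pattern; this is my own illustration, not kernel code, the names (mapped_device_model, dm_request_model) are made up, and pthread rwlocks merely stand in for md->io_lock:

/*
 * Userspace sketch of the stacked dm_request() call path from the trace.
 * Build: gcc -pthread -o dm_stack_model dm_stack_model.c
 */
#include <pthread.h>
#include <stdio.h>
#include <stddef.h>

struct mapped_device_model {
        const char *name;
        pthread_rwlock_t io_lock;            /* stands in for md->io_lock */
        struct mapped_device_model *lower;   /* next dm device down the stack */
};

/* models dm_request(): read-lock io_lock, map the bio, forward it downwards */
static void dm_request_model(struct mapped_device_model *md)
{
        pthread_rwlock_rdlock(&md->io_lock);   /* ~ down_read(&md->io_lock) */
        printf("dm_request on %s: io_lock held for reading\n", md->name);

        /* ~ __split_bio/__map_bio + generic_make_request on the clone;
         * re-enters the "dm_request" of the lower device while the outer
         * io_lock is still held */
        if (md->lower)
                dm_request_model(md->lower);

        pthread_rwlock_unlock(&md->io_lock);   /* ~ up_read(&md->io_lock) */
}

int main(void)
{
        /* the LVM dm device below the RAID5 (the raid5 make_request sits
         * between the two dm layers in the real trace) */
        struct mapped_device_model lower = { .name = "dm (LVM below RAID5)" };
        /* the LVM2 LV below dm-crypt, to which kcryptd submits the mapped bio */
        struct mapped_device_model upper = { .name = "dm (LVM2 LV below dm-crypt)",
                                             .lower = &lower };

        pthread_rwlock_init(&lower.io_lock, NULL);
        pthread_rwlock_init(&upper.io_lock, NULL);

        dm_request_model(&upper);

        pthread_rwlock_destroy(&upper.io_lock);
        pthread_rwlock_destroy(&lower.io_lock);
        return 0;
}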
This might explain the I/O-sensitive crashes I have been suffering. How can this bug be solved?
Merry Christmas!
Xuân
--
dm-devel mailing list
dm-devel@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/dm-devel