Re: BUG - raid 1 deadlock on handle_read_error / wait_barrier

On Mon, 25 Feb 2013 09:43:50 +1100 NeilBrown <neilb@xxxxxxx> wrote:

> On Thu, 21 Feb 2013 15:58:24 -0700 Tregaron Bayly <tbayly@xxxxxxxxxxxx> wrote:
> 
> > Symptom:
> > A RAID 1 array ends up with two threads (flush and raid1) stuck in D
> > state forever.  The array is inaccessible and the host must be restarted
> > to restore access to the array.
> > 
> > I have some scripted workloads that reproduce this within a maximum of a
> > couple hours on kernels from 3.6.11 - 3.8-rc7.  I cannot reproduce on
> > 3.4.32.  3.5.7 ends up with three threads stuck in D state, but the
> > stacks are different from this bug (as it's EOL, maybe of interest in
> > bisecting the problem?).
> 
> Can you post the 3 stacks from the 3.5.7 case?  It might help get a more
> complete understanding.
> 
> ...
> > Both processes end up in wait_event_lock_irq() waiting for favorable
> > conditions in the struct r1conf to proceed.  These conditions evidently
> > never arrive.  I placed printk statements in freeze_array() and
> > wait_barrier() directly before calling their respective
> > wait_event_lock_irq() and this is an example output:
> > 
> > Feb 20 17:47:35 sanclient kernel: [4946b55d-bb0a-7fce-54c8-ac90615dabc1] Attempting to freeze array: barrier (1), nr_waiting (1), nr_pending (5), nr_queued (3)
> > Feb 20 17:47:35 sanclient kernel: [4946b55d-bb0a-7fce-54c8-ac90615dabc1] Awaiting barrier: barrier (1), nr_waiting (2), nr_pending (5), nr_queued (3)
> > Feb 20 17:47:38 sanclient kernel: [4946b55d-bb0a-7fce-54c8-ac90615dabc1] Awaiting barrier: barrier (1), nr_waiting (3), nr_pending (5), nr_queued (3)
> 
> This is very useful, thanks.  Clearly there is one 'pending' request that
> isn't being counted, but also isn't being allowed to complete.
> Maybe it is in pending_bio_list, and so counted in conf->pending_count.
> 
> Could you print out that value as well and try to trigger the bug again?  If
> conf->pending_count is non-zero, then it seems very likely that we have found
> the problem.

Actually, don't bother.  I think I've found the problem.  It is related to
pending_count and is easy to fix.
Could you try this patch please?

Thanks.
NeilBrown

diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
index 6e5d5a5..fd86b37 100644
--- a/drivers/md/raid1.c
+++ b/drivers/md/raid1.c
@@ -967,6 +967,7 @@ static void raid1_unplug(struct blk_plug_cb *cb, bool from_schedule)
 		bio_list_merge(&conf->pending_bio_list, &plug->pending);
 		conf->pending_count += plug->pending_cnt;
 		spin_unlock_irq(&conf->device_lock);
+		wake_up(&conf->wait_barrier);
 		md_wakeup_thread(mddev->thread);
 		kfree(plug);
 		return;
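
For readers unfamiliar with the lost-wakeup pattern this one-liner addresses,
the following is a minimal userspace sketch of it, not the kernel code: the
function names, counter values and the exact wait condition are simplified
stand-ins for the r1conf fields discussed above.  One thread models
freeze_array(), sleeping until every pending request is accounted for; the
other models raid1_unplug(), moving a plugged request onto the pending list.
The final broadcast plays the role of the added wake_up(&conf->wait_barrier);
without it the waiter never re-evaluates its condition.

/* lost_wakeup.c - build with: cc -pthread lost_wakeup.c */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;          /* ~ resync_lock  */
static pthread_cond_t  wait_barrier_q = PTHREAD_COND_INITIALIZER; /* ~ wait_barrier */

static int nr_pending = 5;    /* requests that got past the barrier      */
static int nr_queued  = 3;    /* requests parked for the raid1d thread   */
static int plugged    = 1;    /* request still sitting on a blk_plug     */
static int pending_count = 0; /* requests moved onto pending_bio_list    */

/* Models freeze_array(): wait until every in-flight request is visible.
 * The "+ 1" stands for the request the freezing thread itself is handling. */
static void *freeze_array_model(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&lock);
	while (nr_pending != nr_queued + pending_count + 1)
		pthread_cond_wait(&wait_barrier_q, &lock);
	pthread_mutex_unlock(&lock);
	printf("array frozen\n");
	return NULL;
}

/* Models raid1_unplug(): move the plugged request onto the pending list,
 * then wake anyone sleeping on the barrier so it re-checks its condition. */
static void *raid1_unplug_model(void *arg)
{
	(void)arg;
	sleep(1);  /* let the freezer start waiting first */
	pthread_mutex_lock(&lock);
	pending_count += plugged;
	plugged = 0;
	pthread_mutex_unlock(&lock);
	pthread_cond_broadcast(&wait_barrier_q);  /* analogue of the one-line fix */
	return NULL;
}

int main(void)
{
	pthread_t freezer, unplugger;

	pthread_create(&freezer, NULL, freeze_array_model, NULL);
	pthread_create(&unplugger, NULL, raid1_unplug_model, NULL);
	pthread_join(freezer, NULL);
	pthread_join(unplugger, NULL);
	return 0;
}

Commenting out the broadcast leaves freeze_array_model() asleep forever, which
is the same shape as the hang reported above: the accounting changes, but
nothing tells the sleeper to look again.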
