Re: [PATCH] [md] raid5: check faulty flag for array status during recovery.

Hi Neil,

You are absolutely right, we need the RCU lock for this. Thank you so much!

Eric

On 2015-02-19 2:51 PM, NeilBrown wrote:
On Tue, 6 Jan 2015 15:24:24 -0700 Eric Mei <meijia@xxxxxxxxx> wrote:

Hi Neil,

In an MDRAID-derived work we found and fixed a data corruption bug. We think this also affects vanilla MDRAID, but we didn’t directly prove that by constructing a test to show the corruption. Following is the theoretical analysis; please kindly review it and see if I missed something.

To rebuild a stripe, MD checks whether the array will be optimal after the rebuild completes; if so, it marks the write-intent bitmap (WIB) bits to be cleared, the purpose being to enable “incremental rebuild”. The code section looks like this:

	/* Need to check if array will still be degraded after recovery/resync
	 * We don't need to check the 'failed' flag as when that gets set,
	 * recovery aborts.
	 */
	for (i = 0; i < conf->raid_disks; i++)
		if (conf->disks[i].rdev == NULL)
			still_degraded = 1;

The problem is that checking only rdev == NULL might not be enough. Suppose two drives, D0 and D1, both fail and are marked Faulty; D0 is removed from the array immediately, but because of some lingering IO on D1 it remains in the array with the Faulty flag set. A new drive is pulled in and a rebuild against D0 starts. Now, because no rdev is NULL, MD thinks the array will be optimal. If some writes happen before the rebuild reaches that region, their dirty bits in the WIB will be cleared. When D1 is later added back into the array, we will skip rebuilding those stripes, hence data corruption.
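
To make this concrete, here is a minimal userspace sketch (simplified stand-in structs and flag names, not the real md_rdev/r5conf layout) showing that a NULL-only check reports the array as optimal even though a Faulty device is still present:

#include <stdio.h>
#include <stddef.h>

#define FAULTY 1u				/* stand-in for the md Faulty bit */

struct disk { unsigned int flags; };		/* stand-in for struct md_rdev */

/* Mirrors the current check: only a missing (NULL) slot counts as degraded. */
static int still_degraded_null_only(struct disk **d, int n)
{
	int i, degraded = 0;

	for (i = 0; i < n; i++)
		if (d[i] == NULL)
			degraded = 1;
	return degraded;
}

/* Faulty-aware check: a present-but-Faulty device also keeps it degraded. */
static int still_degraded_faulty_aware(struct disk **d, int n)
{
	int i, degraded = 0;

	for (i = 0; i < n; i++)
		if (d[i] == NULL || (d[i]->flags & FAULTY))
			degraded = 1;
	return degraded;
}

int main(void)
{
	struct disk spare = { 0 }, healthy = { 0 }, lingering = { FAULTY };
	/* D0 already replaced by a fresh spare; D1 failed but not yet removed. */
	struct disk *disks[3] = { &spare, &healthy, &lingering };

	printf("NULL-only check:    %d (claims optimal, WIB dirty bits cleared)\n",
	       still_degraded_null_only(disks, 3));
	printf("Faulty-aware check: %d (still degraded, WIB dirty bits kept)\n",
	       still_degraded_faulty_aware(disks, 3));
	return 0;
}

The Faulty-aware check is the idea behind the attached patch.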

The attached patch (against 3.18.0-rc6) is supposed to fix this issue.

Thanks
Eric

Hi Eric,
  sorry for the delay, and thanks for the reminder...

The issue you described could only affect RAID6 as it requires the array to
continue with two failed drives.

However in the RAID6 case I think you are correct - there is a chance of
corruption if there is a double failure and a delay in removing one device.

Your patch isn't quite safe, as conf->disks[i].rdev can become NULL at any
moment, so it could change to NULL between the test and the dereference.
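
For illustration, the window with a plain test-then-dereference (a sketch of the pattern, not necessarily the exact attached patch) looks like this:

	/* Unsafe: another CPU can clear conf->disks[i].rdev between the
	 * NULL test (read #1) and the flag test (read #2), so read #2 can
	 * dereference a NULL pointer.
	 */
	if (conf->disks[i].rdev == NULL ||			/* read #1 */
	    test_bit(Faulty, &conf->disks[i].rdev->flags))	/* read #2 */
		still_degraded = 1;

Taking a single snapshot of the pointer under rcu_read_lock() and testing only that snapshot closes the window, so I've modified it as follows.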

Thanks,
NeilBrown



Author: Eric Mei <eric.mei@xxxxxxxxxxx>
Date:   Tue Jan 6 09:35:02 2015 -0800

     raid5: check faulty flag for array status during recovery.

     When we have more than 1 drive failure, it's possible we start
     rebuilding one drive while leaving another faulty drive in the array.
     To determine whether the array will be optimal after rebuilding, the
     current code only checks whether a drive is missing, which could
     potentially lead to data corruption. This patch adds a check of the
     Faulty flag.

     Signed-off-by: NeilBrown <neilb@xxxxxxx>

diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
index bc6d7595ad76..022a0d99e110 100644
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -5120,12 +5120,17 @@ static inline sector_t sync_request(struct mddev *mddev, sector_t sector_nr, int
 		schedule_timeout_uninterruptible(1);
 	}
 	/* Need to check if array will still be degraded after recovery/resync
-	 * We don't need to check the 'failed' flag as when that gets set,
-	 * recovery aborts.
+	 * Note in case of > 1 drive failures it's possible we're rebuilding
+	 * one drive while leaving another faulty drive in array.
 	 */
-	for (i = 0; i < conf->raid_disks; i++)
-		if (conf->disks[i].rdev == NULL)
+	rcu_read_lock();
+	for (i = 0; i < conf->raid_disks; i++) {
+		struct md_rdev *rdev = ACCESS_ONCE(conf->disks[i].rdev);
+
+		if (rdev == NULL || test_bit(Faulty, &rdev->flags))
 			still_degraded = 1;
+	}
+	rcu_read_unlock();
 
 	bitmap_start_sync(mddev->bitmap, sector_nr, &sync_blocks, still_degraded);




