Re: [PATCH v4] md: no longer compare spare disk superblock events in super_load

On 2019/10/16 16:00, Yufen Yu wrote:
We have a test case as follow:

   mdadm -CR /dev/md1 -l 1 -n 4 /dev/sd[a-d] \
	--assume-clean --bitmap=internal
   mdadm -S /dev/md1
   mdadm -A /dev/md1 /dev/sd[b-c] --run --force

   mdadm --zero /dev/sda
   mdadm /dev/md1 -a /dev/sda

   echo offline > /sys/block/sdc/device/state
   echo offline > /sys/block/sdb/device/state
   sleep 5
   mdadm -S /dev/md1

   echo running > /sys/block/sdb/device/state
   echo running > /sys/block/sdc/device/state
   mdadm -A /dev/md1 /dev/sd[a-c] --run --force

When we re-add /dev/sda to the array, it starts recovery. After the
other two disks in md1 are taken offline, the recovery is interrupted
and the superblock update cannot be written to the offline disks,
while the spare disk (/dev/sda) can continue to update its superblock.

After stopping the array and re-assembling it, we found that the array
fails to run, with the following kernel messages:

[  172.986064] md: kicking non-fresh sdb from array!
[  173.004210] md: kicking non-fresh sdc from array!
[  173.022383] md/raid1:md1: active with 0 out of 4 mirrors
[  173.022406] md1: failed to create bitmap (-5)
[  173.023466] md: md1 stopped.

Since both sdb and sdc have 'sb->events' values smaller than sda's,
they are kicked from the array. However, the only remaining disk, sda,
was in the 'spare' state before the stop, so it cannot be added to the
conf->mirrors[] array. In the end, array assembly fails and the array
cannot run.

In fact, we can use the older disks sdb or sdc to assemble the array.
That means we should not choose a 'spare' disk as the fresh disk in
analyze_sbs().

To fix the problem, do not compare superblock events when the disk is
a spare, just as validate_super() does.

Signed-off-by: Yufen Yu <yuyufen@xxxxxxxxxx>

v1->v2:
   fix wrong return value in super_90_load
v2->v3:
   adjust the patch format to avoid scripts/checkpatch.pl warning
v3->v4:
   fix the bug pointed out by Song, when the spare disk is the first
   device for load_super
---
  drivers/md/md.c | 57 +++++++++++++++++++++++++++++++++++++++++++------
  1 file changed, 51 insertions(+), 6 deletions(-)


@@ -3597,7 +3632,7 @@ static struct md_rdev *md_import_device(dev_t newdev, int super_format, int supe
   * Check a full RAID array for plausibility
   */
-static void analyze_sbs(struct mddev *mddev)
+static int analyze_sbs(struct mddev *mddev)
  {
  	int i;
  	struct md_rdev *rdev, *freshest, *tmp;
@@ -3618,6 +3653,12 @@ static void analyze_sbs(struct mddev *mddev)
  			md_kick_rdev_from_array(rdev);
  		}
+	/* Cannot find a valid fresh disk */
+	if (!freshest) {
+		pr_warn("md: cannot find a valid disk\n");
+		return -EINVAL;
+	}
+
  	super_types[mddev->major_version].
  		validate_super(mddev, freshest);
@@ -3652,6 +3693,8 @@ static void analyze_sbs(struct mddev *mddev)
  			clear_bit(In_sync, &rdev->flags);
  		}
  	}
+
+	return 0;
  }
 /* Read a fixed-point number.
@@ -5570,7 +5613,9 @@ int md_run(struct mddev *mddev)
  	if (!mddev->raid_disks) {
  		if (!mddev->persistent)
  			return -EINVAL;
-		analyze_sbs(mddev);
+		err = analyze_sbs(mddev);
+		if (err)
+			return -EINVAL;
  	}
 	if (mddev->level != LEVEL_NONE)

Since 'freshest' can be NULL when all of the disks are spares, we
return -EINVAL in that case.

Thanks,
Yufen
