Re: Assemble-Resize-Stop loop doesn't work correctly

On Tue, 09 Oct 2012 17:57:59 +0200 Sebastian Riemer
<sebastian.riemer@xxxxxxxxxxxxxxxx> wrote:

> Hi Neil,
> 
> I've tested growing with "--assume-clean" in a loop using the following
> script. The first grow succeeds, but the following grows fail (mdadm
> 3.2.5, kernel 3.4.10).
> 
> 
> #!/bin/bash
> 
> FIRST="/dev/sda"
> SECON="/dev/sdd"
> MDDEV="/dev/md0"
> SIZE=1
> 
> mdadm --zero-superblock $FIRST
> mdadm --zero-superblock $SECON
> echo y | mdadm -C $MDDEV -e 1.2 \
> --assume-clean -z "${SIZE}G" --force -l 1 -n 2 $FIRST $SECON
> sleep 3
> mdadm -S $MDDEV
> for ((i=0; i<4; i++)); do
>   mdadm -A $MDDEV $FIRST $SECON
>   let "SIZE++"
>   mdadm -G $MDDEV -z ${SIZE}G --assume-clean
>   cat /proc/mdstat
> #  mdadm -D $MDDEV > /dev/null
>   mdadm -S $MDDEV
> done
> 
> 
> Output looks like this:
> 
> mdadm: /dev/md0 has been started with 2 drives.
> mdadm: component size of /dev/md0 has been set to 2097152K
> Personalities : [raid1]
> md0 : active raid1 sda[0] sdd[1]
>       2097152 blocks super 1.2 [2/2] [UU]
>      
> unused devices: <none>
> mdadm: stopped /dev/md0
> mdadm: /dev/md0 has been started with 2 drives.
> mdadm: /dev/md0 is performing resync/recovery and cannot be reshaped
> Personalities : [raid1]
> md0 : active raid1 sda[0] sdd[1]
>       2097152 blocks super 1.2 [2/2] [UU]
>       [==========>..........]  resync = 50.0% (1050624/2097152) finish=8.4min speed=2048K/sec
> 
> 
> Now the output with the "Detail" call (mdadm -D, commented out above)
> enabled after the resize:
> 
> mdadm: /dev/md0 has been started with 2 drives.
> mdadm: component size of /dev/md0 has been set to 2097152K
> Personalities : [raid1]
> md0 : active raid1 sda[0] sdd[1]
>       2097152 blocks super 1.2 [2/2] [UU]
>      
> unused devices: <none>
> mdadm: stopped /dev/md0
> mdadm: /dev/md0 has been started with 2 drives.
> mdadm: component size of /dev/md0 has been set to 3145728K
> Personalities : [raid1]
> md0 : active raid1 sda[0] sdd[1]
>       3145728 blocks super 1.2 [2/2] [UU]
> 
> 
> This one works. Is this intended behaviour?
> 
> Cheers,
> Sebastian


You've hit an unlikely corner-case there.  Thanks.

This patch fixes it.

NeilBrown

From 2225a657ce9fb4a5390a4a82c03e6a0f937b4327 Mon Sep 17 00:00:00 2001
From: NeilBrown <neilb@xxxxxxx>
Date: Thu, 11 Oct 2012 11:41:14 +1100
Subject: [PATCH] md: make sure manual changes to recovery checkpoint are
 saved.

If you make an array bigger but suppress resync of the new region with
  mdadm --grow /dev/mdX --size=max --assume-clean

and then stop the array before anything is written to it, the effect of
"--assume-clean" is lost and the array will resync the new space when
restarted.
So ensure that the metadata is updated in that case.

Reported-by: Sebastian Riemer <sebastian.riemer@xxxxxxxxxxxxxxxx>
Signed-off-by: NeilBrown <neilb@xxxxxxx>

diff --git a/drivers/md/md.c b/drivers/md/md.c
index e868f0c..dff013a 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -3819,6 +3819,8 @@ resync_start_store(struct mddev *mddev, const char *buf, size_t len)
 		return -EINVAL;
 
 	mddev->recovery_cp = n;
+	if (mddev->pers)
+		set_bit(MD_CHANGE_CLEAN, &mddev->flags);
 	return len;
 }
 static struct md_sysfs_entry md_resync_start =
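
For reference, a minimal sketch (not part of the patch) of how the fix can be
checked from userspace, assuming the same two-disk /dev/md0 as in the script
above and that mdadm's "--grow --assume-clean" pushes the new checkpoint
through the resync_start sysfs attribute patched here; device names and the
size are only examples:

#!/bin/bash

MDDEV="/dev/md0"
FIRST="/dev/sda"
SECON="/dev/sdd"

mdadm -A $MDDEV $FIRST $SECON

# Grow without resyncing the new region.
mdadm -G $MDDEV -z 3G --assume-clean

# Recovery checkpoint as seen by the kernel; with --assume-clean no
# resync of the new space should be pending.
cat /sys/block/$(basename $MDDEV)/md/resync_start

# Stop and reassemble: with the patch the checkpoint is saved in the
# metadata on stop, so no resync line should appear here.
mdadm -S $MDDEV
mdadm -A $MDDEV $FIRST $SECON
grep -A 2 "$(basename $MDDEV)" /proc/mdstat
mdadm -S $MDDEV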
