Re: [PATCH 2/2] md/raid5: initialize ->recovery_offset when growing raid_disks


On 10/25/2010 10:35 PM, Neil Brown wrote:
On Fri, 22 Oct 2010 11:03:59 -0700
Dan Williams <dan.j.williams@xxxxxxxxx> wrote:

We mark the disk in-sync, and we also need to initialize ->recovery_offset,
lest we confuse older versions of mdadm that don't consider this case at
assembly (i.e., that when growing we assume the disk is in-sync).

mdadm: /dev/md0 assembled from 3 drives and 1 rebuilding - not enough to start the array.

Cc: <stable@xxxxxxxxxx>
Signed-off-by: Dan Williams <dan.j.williams@xxxxxxxxx>
---
  drivers/md/raid5.c |    1 +
  1 files changed, 1 insertions(+), 0 deletions(-)

diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
index 9e8ecd5..f8a27d4 100644
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -5580,6 +5580,7 @@ static int raid5_start_reshape(mddev_t *mddev)
  			if (raid5_add_disk(mddev, rdev) == 0) {
  				char nm[20];
  				if (rdev->raid_disk >= conf->previous_raid_disks) {
+					rdev->recovery_offset = MaxSector;
   					set_bit(In_sync, &rdev->flags);
  					added_devices++;
  				} else


Sorry, but I'm not getting this one....

rdev->recovery_offset is only ever used when In_sync is clear.  So it makes
no sense to give it a value when In_sync is set.

Maybe there are some places where we clear In_sync that need to have
recovery_offset set to zero, but it isn't obvious to me that that would
explain your symptom.

Can you give a bit more detail of the problem you are seeing please?

Run mdadm -A on an array whose raid_disks is being grown by more than max_degraded.

Here is the related mdadm fix, but I figured it was worthwhile to also address this in the kernel for the corner case of a new kernel with an old mdadm.

commit 156a33719d956fe90cbb1625b13beb52a0d6fd87
Author: Dan Williams <dan.j.williams@xxxxxxxxx>
Date:   Mon Jul 12 12:03:13 2010 -0700

    Assemble: fix assembly in the delta_disks > max_degraded case

    Incremental assembly works on such an array because the kernel sees the
    disk as in-sync and that the array is reshaping.  Teach Assemble() the
    same assumptions.

    This is only needed on kernels that do not initialize ->recovery_offset
    when activating spares for reshape.

    Signed-off-by: Dan Williams <dan.j.williams@xxxxxxxxx>

diff --git a/Assemble.c b/Assemble.c
index afd4e60..409f0d7 100644
--- a/Assemble.c
+++ b/Assemble.c
@@ -804,7 +804,9 @@ int Assemble(struct supertype *st, char *mddev,
                    devices[most_recent].i.events) {
                        devices[j].uptodate = 1;
                        if (i < content->array.raid_disks) {
-                               if (devices[j].i.recovery_start == MaxSector) {
+                               if (devices[j].i.recovery_start == MaxSector ||
+                                   (content->reshape_active &&
+                                    j >= content->array.raid_disks - content->delta_disks)) {
                                        okcnt++;
                                        avail[i]=1;
                                } else

--
Dan
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

