Re: raid5 missing disks during chunk size grow

On Mon, 22 Sep 2014 13:55:24 +0200 Martin Senebald <martin@xxxxxxxxxxx> wrote:

> > On 22.09.2014 at 13:44, Mikael Abrahamsson <swmike@xxxxxxxxx> wrote:
> 
> > On Mon, 22 Sep 2014, Martin Senebald wrote:
> > 
> >> My idea was to first bring the disks back into the array, then continue the grow (using the backup file).
> >> Since add / re-add of the disks was not working, I assumed recreating the array with --assume-clean would bring me closer.
> > 
> > Was this an idea you had that you didn't do, or did you actually execute on it?
> 
> I did .. 

Oops.  Though maybe I should say OOOPS.

Part of your array had one chunk size, part of the array had another.  By
using "create" you had to choose one chunk size or the other.  Obviously
neither is correct for the whole device.

You are now in a situation where you have made a mess and you need to somehow
recover your data.  It is all there, but how patient and careful can you be?

By far the safest approach would be to find some other storage onto which
you can copy all the data.  Then you can examine the data there and see if
it looks OK.

There are three sections to the data:

 1/ the early part of the array which has been reshaped to the new chunk size.
 2/ the part of the array which is stored in the backup file.
 3/ the late part of the array which has not been reshaped yet.
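
The boundary between '1' and '3' is the reshape position that was recorded
in the old superblocks.  Those superblocks are gone now, but your saved
"before create" --examine output should still show it.  A minimal sketch,
assuming you kept that output in a file (the file name here is made up):

    # the field is normally reported as "Reshape pos'n" together with a
    # human-readable GiB figure
    grep -i "reshape" before-create-examine.txt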


Depending on which chunk size you used when you created the array (and
assuming that the newly created array has the same data-offset as the old
array), either '1' or '3' should be available directly in the newly
created array.  Calculating the exact start and size requires care.  I
suggest you try to work it out and I can check your calculations.
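
As a very rough sketch of the arithmetic involved (the number is made up,
and I am assuming the "Reshape pos'n" figure is in KiB -- check it against
the parenthesised GiB value in your output):

    # hypothetical value taken from the old --examine output
    reshape_posn_kib=209715200
    section1_bytes=$((reshape_posn_kib * 1024))
    # section '1' covers roughly array bytes 0 .. section1_bytes; the
    # exact end point (allowing for the region held in the backup file)
    # is the part that needs the careful calculation
    echo "$section1_bytes"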

If you copy that out and then 'create' the array with the other chunk size,
you should be able to copy out the other large section.
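
Something along these lines, where every name and number is an example only
(the chunk size, device names, and above all the device order must match
what you originally had; --assume-clean stops mdadm from resyncing, so it
won't rewrite any data):

    # re-create with the other chunk size; values are placeholders
    mdadm --create /dev/md0 --assume-clean --level=5 --raid-devices=6 \
          --chunk=64 /dev/sd[abcdef]1
    # copy out only the not-yet-reshaped tail, using the offsets from
    # your boundary calculations (skip/count in MiB here)
    dd if=/dev/md0 of=/backup/section3.img bs=1M skip=$SKIP_MIB count=$COUNT_MIB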

Getting the data out of the backup file might require careful reading of a
hex dump of that file to read the 'superblock' to find out exactly what is
stored there and where.  It shouldn't be difficult but does need care.
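
A first look can be as simple as this (the path is hypothetical, and I make
no promises about the exact layout beyond there being a small header up
front describing what was backed up and where):

    # inspect the start of the backup file, where the metadata lives
    hexdump -C /path/to/grow-backup-file | head -n 40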

If you do go down this path, please feel free to ask for more specifics and
ask me to check your calculations.

For future reference "--assemble --force" is your friend.
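
That is, had the superblocks not been overwritten, something like this
(device names and backup-file path are examples only) would normally have
restarted the interrupted reshape from where it stopped:

    mdadm --assemble --force --backup-file=/path/to/grow-backup-file \
          /dev/md0 /dev/sd[abcdef]1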

NeilBrown

> 
> > 
> >> What would be the best way to tackle this problem?
> > 
> > Send mdadm --examine from all 6 component drives to the list and let's take it from there.
> 
> the current state of the disks:
> 
> https://gist.github.com/daquan/94239614fc3b67789c9a#file-current-state
> 
> the state before the --create
> 
> https://gist.github.com/daquan/94239614fc3b67789c9a#file-before-create-assume-clean
> 
> 
> > Under no circumstances do --create on the components.
> 
> that doesn't sound so promising anymore :-/
> 
> > 
> > What kernel version and mdadm version do you have?
> > 
> > -- 
> > Mikael Abrahamsson    email: swmike@xxxxxxxxx
> > 
> 
> 
> BR Martin
> 
