Re: Help needed recovering from raid failure

Neil,

Thanks. I followed your instructions, slightly modified because my version of mdadm does not support the --data-offset option. /dev/sdd was the 3rd drive, and I had physically removed the 4th drive from my server.
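For the record, the create command I ended up running was essentially yours with the offset option dropped (typed from memory, so treat it as approximate):

  mdadm -C /dev/md1 -l5 -n4 --metadata=1.2 --assume-clean \
   /dev/sda2 /dev/sdb2 missing /dev/sde2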

I managed to restart the array. Then I replaced the failing drive, created partitions identical to those on /dev/sda, and added the new partitions to the two arrays.
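Roughly speaking (the replacement drive is shown here as /dev/sdf and the second array as /dev/md0; both are placeholders, as I don't have the exact names in front of me):

  # copy the partition layout from sda to the replacement drive
  sfdisk -d /dev/sda | sfdisk /dev/sdf
  # add the new partitions back into both arrays
  mdadm /dev/md0 --add /dev/sdf1
  mdadm /dev/md1 --add /dev/sdf2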

The data array is now rebuilding and will be done in about 440 minutes. It appears that I've lost nothing important.
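I'm keeping an eye on the rebuild with something like:

  # watch the rebuild progress and finish estimate
  watch -n 60 cat /proc/mdstat
  # or query the array directly
  mdadm --detail /dev/md1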

One question: I did notice that the Array UUID changed when I ran the create command. Is there any way to set it back to the old value?
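From a quick read of the man page, it looks like either passing --uuid at create time or re-assembling with --update=uuid might do it. If I'm reading it right, something along these lines (once the rebuild has finished, and with the replacement drive's partition shown as /dev/sdf2 as a placeholder):

  mdadm --stop /dev/md1
  mdadm --assemble /dev/md1 --update=uuid \
   --uuid=1f28f7bb:7b3ecd41:ca0fa5d1:ccd008df \
   /dev/sda2 /dev/sdb2 /dev/sde2 /dev/sdf2

Does that look sane, or is there a better way?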

Peter


> 
> Before doing this, double check that the names have changed, so check that
>  mdadm --examine /dev/sda2
> shows
>>     Array UUID : 1f28f7bb:7b3ecd41:ca0fa5d1:ccd008df
>>   Device Role : Active device 0
> 
> (among other info) and that
>  mdadm --examine /dev/sdb2
> shows the same Array UUID and
>>   Device Role : Active device 1
> 
> 
> Then run
> 
> mdadm -C /dev/md1 -l5 -n4 --data-offset=262144s --metadata=1.2 --assume-clean \
>  /dev/sda2 /dev/sdb2 missing /dev/sde2





