Re: lost software raid information

On Tue, Apr 23, 2019 at 03:20:02PM +0200, Volker Lieder wrote:
> sda and sdd-sdz are raid6 with 24 drives
>
> Initial build command:
> 
> mdadm --create /dev/md1 --level=6 --raid-devices=24 /dev/sda /dev/sd[d-z]

So none of the drives should have a partition table, but somehow they do. 
(All of them except sdy/sdz, which for some reason were left untouched. Lucky...?)

It's safer to run RAID on partitions instead of whole drives. 
Too much software out there expects drives to be partitioned.
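
For new arrays that could look roughly like this (a sketch only; /dev/sdX is a 
placeholder, and the partitioning step destroys whatever is on that drive):

    # create a GPT with one Linux RAID partition spanning the whole drive
    parted -s /dev/sdX mklabel gpt
    parted -s /dev/sdX mkpart primary 1MiB 100%
    parted -s /dev/sdX set 1 raid on
    # repeat for each drive, then build the array from the sdX1 partitions
    # instead of the bare drives, e.g.:
    # mdadm --create /dev/md1 --level=6 --raid-devices=24 /dev/sd[a-x]1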

Did someone boot Windows on this machine...? Windows does that: 
it "helpfully" creates partition tables where there are none, 
and it irrecoverably damages mdadm metadata in the process.
 
> /dev/sda:
>    MBR Magic : aa55
> Partition[0] :   4294967295 sectors at            1 (type ee)
> mdadm: No md superblock detected on /dev/sda1.
> mdadm: No md superblock detected on /dev/sda9.
> [    6.044546]  sda: sda1 sda9

The partition numbering is weird too - partition 1, partition 9, 
nothing in between for all of them.

Is there anything on these partitions? (file -s /dev/sd*)
Any data written to those partitions has likely damaged data on the RAID.
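
To check without writing anything, something like this (a sketch; the device 
list is just a guess at your naming, adjust as needed):

    # look for filesystem or other signatures on the mystery partitions
    for p in /dev/sda[19] /dev/sd[d-z][19]; do
        echo "== $p"
        file -s "$p"
        blkid "$p"
    done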

| /dev/sdz:
|           Magic : a92b4efc
|         Version : 1.2
|     Feature Map : 0x1
|      Array UUID : 44cbe821:c4e585d1:90652f85:aaaa5ec3
|            Name : archive03:1  (local to host archive03)
|   Creation Time : Fri Mar 31 13:06:57 2017
|      Raid Level : raid6                   
|    Raid Devices : 24
| 
|  Avail Dev Size : 11720783024 (5588.90 GiB 6001.04 GB)
|      Array Size : 128928611328 (122955.91 GiB 132022.90 GB)
|   Used Dev Size : 11720782848 (5588.90 GiB 6001.04 GB)
|     Data Offset : 262144 sectors                      
|    Super Offset : 8 sectors
|    Unused Space : before=262056 sectors, after=176 sectors
|           State : clean
|     Device UUID : 9044d88c:bce3d9ab:0360b279:07c60455
| 
| Internal Bitmap : 8 sectors from superblock
|     Update Time : Mon Apr 22 18:52:31 2019 
|   Bad Block Log : 512 entries available at offset 72 sectors
|        Checksum : b0d0445c - correct                        
|          Events : 437007
| 
|          Layout : left-symmetric
|      Chunk Size : 512K          
| 
|    Device Role : Active device 23
|    Array State : AAAAAAAAAAAAAAAAAAAAAAAA ('A' == active, '.' == missing, 'R' == replacing)

So assuming this is all correct and none of the drives were bad, 
and the drive order did not change, the create command should be:

mdadm --create /dev/md1 --assume-clean \
    --metadata=1.2 --data-offset=262144s \
    --level=6 --chunk=512K --layout=ls \
    --raid-devices=24 /dev/sda /dev/sd[d-z]

However, it would be better to run this command on overlays instead 
and check things out without actually writing to these drives first; 
the drive order or any number of other assumptions may be wrong here.

https://raid.wiki.kernel.org/index.php/Recovering_a_failed_software_RAID#Making_the_harddisks_read-only_using_an_overlay_file
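
Very roughly, following that page (a sketch; the overlay file size and paths 
are examples, and all writes then land in the sparse overlay files rather 
than on the real drives):

    # one device-mapper snapshot per member drive
    for d in sda sd{d..z}; do
        truncate -s 6001G overlay-$d       # sparse; only stores blocks that get written
        loop=$(losetup -f --show overlay-$d)
        size=$(blockdev --getsz /dev/$d)
        dmsetup create $d-overlay --table "0 $size snapshot /dev/$d $loop P 8"
    done

Then run the --create above against /dev/mapper/sda-overlay /dev/mapper/sd[d-z]-overlay 
and see whether the data on md1 checks out before touching the real drives.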

If it actually works, the array will have a new UUID, so mdadm.conf 
and the initramfs have to be updated as well. While you could re-create 
the array with the same UUID, it's usually safer not to.
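
On Debian-ish systems that's roughly (a sketch; dracut-based distros use 
dracut -f instead of update-initramfs):

    mdadm --detail --scan            # prints the ARRAY line with the new UUID
    # put that line into /etc/mdadm/mdadm.conf in place of the old one,
    # then rebuild the initramfs so it picks up the change
    update-initramfs -u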

Good luck
Andreas Klauer


