Re: Recovering RAID set after OS disk failed

Davide,

Do you expect a partition table on the array, or did you create the filesystem (or whatever
higher layers you have) directly on the whole device?

What does your /etc/fstab entry say?

Did you try a simple
	sudo mount /dev/md0

You do not say if you rebooted (to let the initrd do its thing).
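If the filesystem sits directly on the array (in which case having no partition table on /dev/md0 is normal), the fstab entry would name the md device itself. A minimal sketch, where the mount point and filesystem type are assumptions:

```
# /etc/fstab fragment -- mount point and fs type are guesses, adjust to taste
/dev/md0    /mnt/raid    ext4    defaults    0    2
```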

Eyal

On 06/03/14 15:49, Davide Guarisco wrote:
Peter, thanks for your help. Below are the answers.


On Jun 2, 2014, at 05:36, Kővári Péter <peter@xxxxxxxxxxxxxx> wrote:

Hi Davide,

Open a console on / ssh into your NAS box, issue the following command, and send us the results:
$ cat /proc/mdstat


Personalities :
unused devices: <none>




Please also issue the following commands

mdadm --examine /dev/sdX[Y]

$ mdadm --examine /dev/sdb
mdadm: cannot open /dev/sdb: Permission denied


$ sudo mdadm --examine /dev/sdb
/dev/sdb:
	MBR Magic : aa55
Partition[0] : 1953520002 sectors at       63 (type fd)


$ sudo mdadm --examine /dev/sdb1
/dev/sdb1:
           Magic : a92b4efc
         Version : 0.90.00
            UUID : f8a943c7:2ffa13d0:9770de34:eca2e81c (local to host gecko)
   Creation Time : Tue Mar  3 23:27:50 2009
      Raid Level : raid5
   Used Dev Size : 976759936 (931.51 GiB 1000.20 GB)
      Array Size : 2930279808 (2794.53 GiB 3000.61 GB)
    Raid Devices : 4
   Total Devices : 4
Preferred Minor : 0

     Update Time : Wed May 28 21:52:54 2014
           State : clean
  Active Devices : 4
Working Devices : 4
  Failed Devices : 0
   Spare Devices : 0
        Checksum : 2d5185d8 - correct
          Events : 46

          Layout : left-symmetric
      Chunk Size : 128K

       Number   Major   Minor   RaidDevice State
this     0       8       17        0      active sync   /dev/sdb1

    0     0       8       17        0      active sync   /dev/sdb1
    1     1       8       33        1      active sync   /dev/sdc1
    2     2       8       49        2      active sync   /dev/sdd1
    3     3       8       65        3      active sync   /dev/sde1


mdadm --examine /dev/sdc1

/dev/sdc1:
           Magic : a92b4efc
         Version : 0.90.00
            UUID : f8a943c7:2ffa13d0:9770de34:eca2e81c (local to host gecko)
   Creation Time : Tue Mar  3 23:27:50 2009
      Raid Level : raid5
   Used Dev Size : 976759936 (931.51 GiB 1000.20 GB)
      Array Size : 2930279808 (2794.53 GiB 3000.61 GB)
    Raid Devices : 4
   Total Devices : 4
Preferred Minor : 0

     Update Time : Wed May 28 21:52:54 2014
           State : clean
  Active Devices : 4
Working Devices : 4
  Failed Devices : 0
   Spare Devices : 0
        Checksum : 2d5185ea - correct
          Events : 46

          Layout : left-symmetric
      Chunk Size : 128K

       Number   Major   Minor   RaidDevice State
this     1       8       33        1      active sync   /dev/sdc1

    0     0       8       17        0      active sync   /dev/sdb1
    1     1       8       33        1      active sync   /dev/sdc1
    2     2       8       49        2      active sync   /dev/sdd1
    3     3       8       65        3      active sync   /dev/sde1


…etc. So it seems to me that we are OK, with the RAID 5 set set up on /dev/sdb1, /dev/sdc1, /dev/sdd1, /dev/sde1.
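The sizes in those dumps are also internally consistent: a 4-device RAID5 stores three devices' worth of data (one device's worth goes to parity), so Array Size should be three times Used Dev Size. A quick sketch of the arithmetic, using the numbers from the --examine output above:

```shell
# RAID5 with 4 members keeps 3 members' worth of data (1 share goes to parity).
per_dev_kib=976759936              # "Used Dev Size" from mdadm --examine (KiB)
array_kib=$((per_dev_kib * 3))     # expected "Array Size"
echo "$array_kib"                  # prints 2930279808, matching the dump
```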


Where X is one of the RAID drives' names and Y is the partition number, if you created the RAID set on partitions. (If not, leave out the number.) So, for example (assuming that your OS drive is /dev/sda, and your RAID drives are /dev/sdb, /dev/sdc and so on), issue the following commands:

$ mdadm --examine /dev/sdb
or
$ mdadm --examine /dev/sdb1

and so on for all 4 drives. And send back the results.
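What you are looking for across the four dumps is that the identity fields agree: members assemble together only when UUID (and, for a clean start, the Events count) match. A sketch of that comparison, using the fields from the dumps above as stand-ins for real `mdadm --examine` runs (the /tmp paths are made up):

```shell
# Save each member's identity fields, then diff them pairwise.
# On the real box you would capture them with something like:
#   sudo mdadm --examine /dev/sdX1 | grep -E 'UUID|Events'
mkdir -p /tmp/md-examine
printf 'UUID : f8a943c7:2ffa13d0:9770de34:eca2e81c\nEvents : 46\n' > /tmp/md-examine/sdb1
printf 'UUID : f8a943c7:2ffa13d0:9770de34:eca2e81c\nEvents : 46\n' > /tmp/md-examine/sdc1
# Identical fields: diff prints nothing and exits 0.
diff /tmp/md-examine/sdb1 /tmp/md-examine/sdc1 && echo "superblocks agree"
```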

p.s.
Before anything else, you might try auto-assembling the set with:
$ mdadm -v --assemble --scan

Trying this while holding my breath…

mdadm -v --assemble --scan

mdadm: looking for devices for /dev/md0
mdadm: no recogniseable superblock on /dev/dm-1
mdadm: no recogniseable superblock on /dev/dm-0
mdadm: no RAID superblock on /dev/sde
mdadm: no RAID superblock on /dev/sdd
mdadm: no RAID superblock on /dev/sdb
mdadm: no RAID superblock on /dev/sdc
mdadm: no RAID superblock on /dev/sda5
mdadm: no RAID superblock on /dev/sda2
mdadm: no RAID superblock on /dev/sda1
mdadm: no RAID superblock on /dev/sda
mdadm: /dev/sde1 is identified as a member of /dev/md0, slot 3.
mdadm: /dev/sdd1 is identified as a member of /dev/md0, slot 2.
mdadm: /dev/sdb1 is identified as a member of /dev/md0, slot 0.
mdadm: /dev/sdc1 is identified as a member of /dev/md0, slot 1.
mdadm: added /dev/sdc1 to /dev/md0 as 1
mdadm: added /dev/sdd1 to /dev/md0 as 2
mdadm: added /dev/sde1 to /dev/md0 as 3
mdadm: added /dev/sdb1 to /dev/md0 as 0
mdadm: /dev/md0 has been started with 4 drives.


OK, this seems successful as well. My RAID is /dev/md0.





It might assemble your raid set for you successfully out of the box.  (If not, send here the output.)
If this assembles your set successfully, then you just need to save your config in /etc/mdadm/mdadm.conf, do an initramfs update, and you are good to go.
So to save the config issue:
$ mdadm --examine --scan >> /etc/mdadm/mdadm.conf

cat /etc/mdadm/mdadm.conf

ARRAY /dev/md0 UUID=f8a943c7:2ffa13d0:9770de34:eca2e81c
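For reference, a minimal /etc/mdadm/mdadm.conf usually pairs that ARRAY line with a DEVICE line; the one sketched below is an assumption ("partitions" tells mdadm to consider every entry in /proc/partitions when scanning):

```
DEVICE partitions
ARRAY /dev/md0 UUID=f8a943c7:2ffa13d0:9770de34:eca2e81c
```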




then update the initramfs so the set will auto-assemble on next boot:
$ update-initramfs -k all -u

update-initramfs -k all -u

update-initramfs: Generating /boot/initrd.img-3.13.0-24-generic



But now:

sudo fdisk -l

Disk /dev/md0: 3000.6 GB, 3000606523392 bytes
2 heads, 4 sectors/track, 732569952 cylinders, total 5860559616 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 131072 bytes / 393216 bytes
Disk identifier: 0x00000000

Disk /dev/md0 doesn't contain a valid partition table
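For what it's worth, that fdisk output lines up with the RAID geometry reported earlier, so the array itself looks healthy; a sketch of the arithmetic:

```shell
# Total bytes = total sectors x logical sector size.
echo $((5860559616 * 512))    # prints 3000606523392, the size fdisk reports
# Optimal I/O size = chunk size x 3 data disks (128 KiB x 3, for 4-device RAID5).
echo $((131072 * 3))          # prints 393216, matching fdisk's optimal I/O size
```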



How do I fix this, and how do I gain access to /dev/md0?

Thanks,
	Davide







--
Eyal Lebedinsky (eyal@xxxxxxxxxxxxxx)
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html



