Re: Degraded RAID-6 array reassembling as a RAID-0 array

On 11/4/19 6:44 am, Mark Wagner wrote:
I've got a server with a five-disk RAID-6 array.  Last night, power to
the server went out, and when I powered it back up, one of the disks
failed.  During bootup, the RAID array was assembled as an inactive
four-disk RAID-0 array with four spares rather than the expected
degraded RAID-6, and manually assembling the array doesn't give a
different result.  Any idea how I can get my array back?  There
doesn't appear to be anything wrong with the four remaining disks;
the array simply isn't assembling.

Steps I used to manually assemble the array:

# mdadm --stop /dev/md2

# mdadm --assemble --verbose /dev/md2 /dev/sd[abcd]
mdadm: looking for devices for /dev/md2
mdadm: /dev/sda is identified as a member of /dev/md2, slot 1.
mdadm: /dev/sdb is identified as a member of /dev/md2, slot 0.
mdadm: /dev/sdc is identified as a member of /dev/md2, slot 3.
mdadm: /dev/sdd is identified as a member of /dev/md2, slot 4.
mdadm: added /dev/sda to /dev/md2 as 1
mdadm: no uptodate device for slot 2 of /dev/md2
mdadm: added /dev/sdc to /dev/md2 as 3
mdadm: added /dev/sdd to /dev/md2 as 4
mdadm: added /dev/sdb to /dev/md2 as 0
mdadm: /dev/md2 assembled from 4 drives - need all 5 to start it (use
--run to insist).

Did you do what the log told you to do?

mdadm --stop /dev/md2

mdadm --assemble --verbose --run /dev/md2 /dev/sd[abcd]

mdadm --assemble --help says this:
  --run         -R   : Try to start the array even if not enough devices
                       for a full array are present
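
If --run alone still leaves the array inactive, the usual next step
(a sketch, not something tried in this thread; it assumes the same
device names and that all four remaining superblocks report the same
Events count, as the --examine output suggests) is a forced assembly:

mdadm --stop /dev/md2

mdadm --assemble --verbose --force --run /dev/md2 /dev/sd[abcd]

--force lets mdadm tolerate a small event-count mismatch between
members; with the counts already identical it should simply start the
array degraded, after which the failed disk can be replaced and
rebuilt with mdadm --add.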

Regards,
Adam


The results of the above steps:

# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md2 : inactive sdc[5](S) sdd[7](S) sda[9](S) sdb[8](S)
       11721062240 blocks super 1.2

unused devices: <none>
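
The (S) flag after each member is not a real demotion to spare: an
array that md could not start is held inactive, and every member of
an inactive array is listed as a spare in /proc/mdstat. A quick way
to check that the members still agree with each other (a sketch,
assuming the same device names) is to compare the superblock fields
that matter for assembly:

# mdadm --examine /dev/sd[abcd] | grep -E 'Events|Device Role|Array State'

If the Events counts match across all four devices, a forced assembly
should be safe.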

# mdadm --detail /dev/md2
/dev/md2:
            Version : 1.2
         Raid Level : raid0
      Total Devices : 4
        Persistence : Superblock is persistent

              State : inactive
    Working Devices : 4

               Name : elephant:2  (local to host elephant)
               UUID : bf34e086:5a85e77b:a2bbe941:b399e756
             Events : 17940326

     Number   Major   Minor   RaidDevice

        -       8       32        -        /dev/sdc
        -       8        0        -        /dev/sda
        -       8       48        -        /dev/sdd
        -       8       16        -        /dev/sdb
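
The "Raid Level : raid0" line is a similar display artifact: mdadm
--detail reports on the running array, and an array that has never
been started has no personality for it to show, so it apparently
falls back to raid0 here. The on-disk metadata is what counts, and it
still records raid6, which can be confirmed per device (a sketch,
assuming /dev/sda as below):

# mdadm --examine /dev/sda | grep 'Raid Level'
      Raid Level : raid6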



# mdadm --examine /dev/sda
(the other three drives are identical except for UUIDs and checksums)
/dev/sda:
           Magic : a92b4efc
         Version : 1.2
     Feature Map : 0x1
      Array UUID : bf34e086:5a85e77b:a2bbe941:b399e756
            Name : elephant:2  (local to host elephant)
   Creation Time : Wed Jan 30 07:38:18 2013
      Raid Level : raid6
    Raid Devices : 5

  Avail Dev Size : 5860531120 (2794.52 GiB 3000.59 GB)
      Array Size : 8790795264 (8383.56 GiB 9001.77 GB)
   Used Dev Size : 5860530176 (2794.52 GiB 3000.59 GB)
     Data Offset : 2048 sectors
    Super Offset : 8 sectors
    Unused Space : before=1960 sectors, after=944 sectors
           State : clean
     Device UUID : 70335f31:bac3b30a:4b413322:16d0ca05

Internal Bitmap : 8 sectors from superblock
     Update Time : Wed Apr 10 16:35:51 2019
   Bad Block Log : 512 entries available at offset 72 sectors
        Checksum : f724515a - correct
          Events : 17940326

          Layout : left-symmetric
      Chunk Size : 512K

    Device Role : Active device 1
    Array State : AAAAA ('A' == active, '.' == missing, 'R' == replacing)

Relevant (lack of) dmesg output:

# dmesg|grep md
[    2.453207] md: Waiting for all devices to be available before autodetect
[    2.457392] md: If you don't use raid, use raid=noautodetect
[    2.461771] md: Autodetecting RAID arrays.
[    2.465981] md: autorun ...
[    2.470140] md: ... autorun DONE.
[ 5840.029700] md: md2 stopped.
[ 6307.979894] md: md2 stopped.
[ 6307.981624] md: md2 stopped.
[ 6501.517031] md: md2 stopped.
[ 7028.706986] md: md2 stopped.
[ 7028.709221] md: md2 stopped.
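
The quiet autodetect run is expected rather than a clue: in-kernel
autodetection only handles old 0.90 superblocks on partitions of type
0xfd, while a 1.2-superblock array like this one is assembled from
userspace (initramfs, udev rules, or an mdadm.conf ARRAY line). Once
the array is running again, a config line can be captured so that
boot-time assembly doesn't depend on autodetection at all (a sketch,
assuming /etc/mdadm.conf as on Gentoo):

# mdadm --detail --scan >> /etc/mdadm.conf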

# uname -a
Linux elephant 4.14.42-gentoo #2 SMP Mon May 21 23:25:01 UTC 2018
x86_64 Intel(R) Core(TM)2 Quad CPU Q9400 @ 2.66GHz GenuineIntel
GNU/Linux

# mdadm --version
mdadm - v4.1 - 2018-10-01

--
Mark


--
Adam Goryachev
Website Managers
www.websitemanagers.com.au


