Re: Raid 5 where 2 disks out of 4 were unplugged

On 26/08/2021 10:18, Gennaro Oliva wrote:
Hello,
I have a QNAP with Linux 3.4.6 and mdadm 3.3. I have 4 drives assembled
in raid 5; two of those drives were accidentally removed and now they
are out of sync. This is a partial output of mdadm --examine

/dev/sda3:
     Update Time : Thu Jul  8 18:01:51 2021
        Checksum : 4bc8157c - correct
          Events : 469678
    Device Role : Active device 0
    Array State : AAA. ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdb3:
     Update Time : Thu Jul  8 18:01:51 2021
        Checksum : 7fac997f - correct
          Events : 469678
    Device Role : Active device 1
    Array State : AAA. ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdc3:
     Update Time : Thu Jul  8 13:15:58 2021
        Checksum : fcd5279f - correct
          Events : 469667
    Device Role : Active device 2
    Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdd3:
     Update Time : Thu Jul  8 13:15:58 2021
        Checksum : b9bc1e2e - correct
          Events : 469667
    Device Role : Active device 3
    Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)

The disks are all healthy. I tried to re-assemble the array with
mdadm --verbose --assemble --force
using various combinations of three drives, or all four drives, but
I am always told there are not enough drives to start the array.

This is the output when trying to use all the drives:

mdadm --verbose --assemble --force /dev/md1 /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3
mdadm: looking for devices for /dev/md1
mdadm: failed to get exclusive lock on mapfile - continue anyway...
mdadm: /dev/sda3 is identified as a member of /dev/md1, slot 0.
mdadm: /dev/sdb3 is identified as a member of /dev/md1, slot 1.
mdadm: /dev/sdc3 is identified as a member of /dev/md1, slot 2.
mdadm: /dev/sdd3 is identified as a member of /dev/md1, slot 3.
mdadm: added /dev/sdb3 to /dev/md1 as 1
mdadm: added /dev/sdc3 to /dev/md1 as 2 (possibly out of date)
mdadm: added /dev/sdd3 to /dev/md1 as 3 (possibly out of date)
mdadm: added /dev/sda3 to /dev/md1 as 0
mdadm: /dev/md1 assembled from 2 drives - not enough to start the array.

The number of events is really close (11). What is my next option to
recover the partition? Do I need to rebuild the superblock?
What options should I use?

Do NOT "rebuild the superblock", whatever you mean by that. What I think you need to do is force-assemble the array. You might lose a bit of data; the first thing you will need to do after a forced assembly is to check the file system ...
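As a rough sketch (device names taken from your --examine output above; only run this for real after testing with overlays, and adjust the fsck line if there is LVM between the array and the file system):

```shell
# Stop any half-assembled array left over from earlier attempts
mdadm --stop /dev/md1

# Force-assemble with all four members; --force lets mdadm accept
# the two members (sdc3, sdd3) whose event counts are slightly behind
mdadm --verbose --assemble --force /dev/md1 /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3

# Confirm the array started, then check the file system read-only first
cat /proc/mdstat
fsck -n /dev/md1
```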

The low discrepancy in the event count is a good sign; you won't lose much.

What I would suggest is that you read up on the Linux RAID wiki, use overlays to test and make sure you won't lose anything, and then do the force assembly for real.
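The overlay idea is roughly this (a minimal sketch of the technique the wiki describes; the overlay file size and paths here are my assumptions, and the wiki's script is more complete):

```shell
# One device-mapper snapshot per member, backed by a sparse file.
# All writes go to the overlay files; the real disks stay untouched.
for d in sda3 sdb3 sdc3 sdd3; do
  truncate -s 4G /tmp/overlay-$d              # sparse file to absorb writes
  loop=$(losetup -f --show /tmp/overlay-$d)   # loop device over the file
  size=$(blockdev --getsz /dev/$d)            # member size in 512-byte sectors
  dmsetup create $d-ovl --table "0 $size snapshot /dev/$d $loop P 8"
done

# Test the forced assembly against the overlays instead of the disks:
mdadm --assemble --force /dev/md1 /dev/mapper/sda3-ovl /dev/mapper/sdb3-ovl \
      /dev/mapper/sdc3-ovl /dev/mapper/sdd3-ovl
```

If the array assembles and the file system checks out on the overlays, tear them down (dmsetup remove, losetup -d) and repeat the assembly on the real devices.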

https://raid.wiki.kernel.org/index.php/Linux_Raid

Cheers,

Wol



