Re: Multiple disk failure, but slot numbers are corrupt and preventing assembly.

Hello,

On 4/23/07, David Greaves <david@xxxxxxxxxxxx> wrote:
There is some odd stuff in there:

/dev/sda1:
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Events : 0.115909229

/dev/sdb1:
Active Devices : 5
Working Devices : 4
Failed Devices : 1
Events : 0.115909230

/dev/sdc1:
Active Devices : 8
Working Devices : 8
Failed Devices : 1
Events : 0.115909230

/dev/sdd1:
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Events : 0.115909230

but your event counts are consistent. It looks like corruption on 2 disks :(

Exactly.
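Incidentally, the Events comparison across members is easy to script. A minimal sketch using the per-device values quoted above (the hard-coded sample stands in for real `mdadm --examine` output):

```shell
# Count how many members agree on each Events value. The sample data
# below is the set of Events counters quoted above; on a live system
# you would collect them with something like:
#   for d in /dev/sd[abcd]1; do mdadm --examine "$d" | grep Events; done
printf '%s\n' \
  'sda1 0.115909229' \
  'sdb1 0.115909230' \
  'sdc1 0.115909230' \
  'sdd1 0.115909230' |
awk '{ count[$2]++ } END { for (e in count) print e, count[e] }' | sort
# prints:
#   0.115909229 1
#   0.115909230 3
```

Any value shared by fewer members than the rest is the odd one out; here sda1 is one event behind the other three.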

Or did you try some things?

We tried updating the superblocks. It did not help; the superblocks
still contain corrupt data somehow:

[root@localhost ~]# mdadm --examine /dev/sdb1
/dev/sdb1:
[...]
    Number   Major   Minor   RaidDevice State
this -11221199   -1288577935    -1551230943    2035285809      faulty active removed
[...]

[root@localhost ~]# mdadm --examine /dev/sdc1
/dev/sdc1:
[...]
    Number   Major   Minor   RaidDevice State
this 1038288281   293191225    29538921    -2128142983      faulty active write-mostly
[...]
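For anyone else hitting this: the giveaway is that Number/Major/Minor/RaidDevice in the "this" line should be small non-negative integers on a healthy 0.90 superblock. A minimal sanity check, sketched against the corrupt sdb1 line quoted above (the 4096 bound is an arbitrary assumption for the sketch, not anything mdadm enforces):

```shell
# Flag a "this" line whose fields cannot be valid superblock values.
# Sample input is the corrupt line from /dev/sdb1 above; on a live
# system you would feed it `mdadm --examine /dev/sdb1` output instead.
printf 'this -11221199 -1288577935 -1551230943 2035285809 faulty\n' |
awk '$1 == "this" {
    ok = 1
    for (i = 2; i <= 5; i++)             # Number, Major, Minor, RaidDevice
        if ($i + 0 < 0 || $i + 0 > 4096) # 4096: arbitrary sanity bound
            ok = 0
    print (ok ? "superblock fields look sane" : "superblock fields look corrupt")
}'
# prints: superblock fields look corrupt
```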

That seems to be exactly what mdadm barfs on:

[root@localhost ~]# mdadm -v --assemble --scan --config=/tmp/mdadm.conf --force
[...]
mdadm: no uptodate device for slot 1 of /dev/md0
mdadm: no uptodate device for slot 2 of /dev/md0
[...]
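That error follows directly from the corrupt RaidDevice fields: during assembly each member claims the slot recorded in its own superblock, so a slot whose rightful member carries a garbage value is never claimed. A toy illustration (the slot numbers 0 and 3 for sda1/sdd1 are assumed for the sketch, not taken from the real array):

```shell
# Simulate slot assignment during assembly of a 4-member array.
# sdb1/sdc1 carry the garbage RaidDevice values quoted above, so
# slots 1 and 2 go unclaimed -- exactly the error mdadm printed.
printf '%s\n' 'sda1 0' 'sdb1 2035285809' 'sdc1 -2128142983' 'sdd1 3' |
awk -v n=4 '
    { claimed[$2] = $1 }          # each device claims its own slot
    END {
        for (s = 0; s < n; s++)
            if (s in claimed)
                print "slot", s, "filled by", claimed[s]
            else
                print "no uptodate device for slot", s
    }'
# prints:
#   slot 0 filled by sda1
#   no uptodate device for slot 1
#   no uptodate device for slot 2
#   slot 3 filled by sdd1
```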

Regards,

Leon.
-
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
