force a raid restart?

Scenario:

  System has 2 3ware IDE RAID controllers.  Controller 1 has 8 disks
installed, built out as two software 4-disk RAID5 arrays created with
the raidtools mkraid.

Disk 0 (first array) physically failed, so that array was running in
  degraded mode.  Disk 5 (second array) failed 2 days later, but this
  time generated enough IO to hang up the controller.  That appears to
  have caused the raid subsystem to mark disk 1 (first array) bad as
  well, even though it has since passed a number of disk tests and
  seems ok.

  As a result md0 is dead because of disk 0 and disk 1.  Is there a way
to force md0 to restart, since as soon as it saw disk 1 as dead it just
stopped the array?  The data should all be intact, just degraded, since
disk 0 is physically dead and off for RMA.  I've installed mdadm and am
reading the docs, but I need this back quickly if it's possible.
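
  From what I've read so far, forced assembly looks like the relevant
feature; here's a sketch of what I'm considering, where the /dev/sd*
names are only placeholders for the three surviving members of md0 (the
real names on this devfs box would differ):

    # stop whatever half-started state md0 is in
    mdadm --stop /dev/md0

    # force-assemble md0 from the surviving members, ignoring the
    # stale "faulty" mark on disk 1 (device names are placeholders)
    mdadm --assemble --force /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sdd1

    # confirm it came back running, degraded with one missing member
    cat /proc/mdstat

Does that look like the right direction, or is there a safer way?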

  Also, we use devfs.  In most /etc/raidtab examples I've seen, the
arrays are built referencing /dev/hdX or /dev/sdX.  Since those names
move around a lot, is there any reason other than length why we
shouldn't use the /dev/scsi/host...* devices instead?
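
Something like the following is what I have in mind (just a sketch; the
host/bus/target numbers are placeholders for wherever the 3ware units
actually show up under devfs here):

    raiddev /dev/md0
            raid-level              5
            nr-raid-disks           4
            nr-spare-disks          0
            persistent-superblock   1
            chunk-size              64
            device  /dev/scsi/host0/bus0/target0/lun0/part1
            raid-disk       0
            device  /dev/scsi/host0/bus0/target1/lun0/part1
            raid-disk       1
            device  /dev/scsi/host0/bus0/target2/lun0/part1
            raid-disk       2
            device  /dev/scsi/host0/bus0/target3/lun0/part1
            raid-disk       3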

Robert



:wq!
---------------------------------------------------------------------------
Robert L. Harris                     | PGP Key ID: FC96D405
                               
DISCLAIMER:
      These are MY OPINIONS ALONE.  I speak for no-one else.
FYI:
 perl -e 'print $i=pack(c5,(41*2),sqrt(7056),(unpack(c,H)-2),oct(115),10);'


