Strange Software RAID Problem

Hi All,
Here is the problem that I am currently experiencing on a production server.  I
have a Software RAID1 array that consists of two 36GB SCSI 160 drives.  I cat'd
out /proc/mdstat and here is what was returned:

     Personalities : [raid1] 
     read_ahead 1024 sectors
     md0 : active raid1 sdb1[1]
        32507392 blocks [2/1] [_U]
      
     md1 : active raid1 sdb2[1]
        2562240 blocks [2/1] [_U]
      
     unused devices: <none>

The machine is still running; however, it looks as if one of the drives has
failed.  The machine has not been running very long, and I am not 100% sure
the RAID array was ever running on both disks.  Is there any way to tell
whether a drive has actually failed, or whether the array was never
initialized on both disks?
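
For what it is worth, here is roughly what I was planning to run first
(assuming mdadm is installed; this box may only have raidtools):

     # Does the kernel still see both disks?
     cat /proc/partitions

     # Detailed array state, if mdadm is available
     mdadm --detail /dev/md0
     mdadm --detail /dev/md1

     # Any SCSI or md errors still in the kernel ring buffer?
     dmesg | grep -i -e sda -e sdb -e raid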

The array was constructed during the setup procedure, and the machine boots
just fine.  Current uptime is around 29 days.  There are no messages in
/var/log/messages or /var/log/secure about a drive being down, but from the
Software-RAID HOWTO it looks as if the array is not working correctly.
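
In case it matters, I also intend to grep the rotated logs, since whatever
happened may predate the current /var/log/messages:

     # Search current and rotated syslogs for md or SCSI errors
     grep -i -e raid -e md0 -e md1 -e sda /var/log/messages*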

Are there any diagnostic steps that can be taken without bringing the machine
down, as it is currently a production machine?
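
And if it turns out that /dev/sda is healthy and its partitions were simply
never added to the mirrors, my reading of the HOWTO is that they can be
re-added online along these lines (untested on my side, and sda1/sda2 are
only my guess at the missing members):

     # Hot-add the missing members; the kernel resyncs in the background
     mdadm /dev/md0 --add /dev/sda1
     mdadm /dev/md1 --add /dev/sda2
     # (with raidtools instead: raidhotadd /dev/md0 /dev/sda1)

     # Watch the resync progress
     watch cat /proc/mdstat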

Thanks for the help,

 Peter Maag


-- 
Shrike-list mailing list
Shrike-list@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/shrike-list
