Re: raid5 in degraded mode; trying to revive

On Tuesday December 6, search.lists@xxxxxxxxx wrote:
> Hello, I have been trying to figure out how to fix my raid system,
> SUSE 9.3, linux 2.6.11.4-21.9-default. A hard reset put my raid in
> an unstable state, with almost the same errors as
> http://sumo.genetics.ucla.edu/pipermail/nelsonlab-dev/2004-August/000150.html
> except that sda1 seems to be the problem.
> md: kicking non-fresh sda1 from array!
> 
> I did
> mdadm -A -f /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1
> and then /proc/mdstat shows the 2-disk raid as active, but I could not
> mount it; mounting /dev/md0 hangs.  Later I tried
> mdadm --assemble /dev/md0 --force /dev/sda1 /dev/sdb1 /dev/sdc1
> same thing.
> 
> If the raid is ok in degraded (missing 1 drive) mode, shouldn't I be
> able to mount it?

Yes, you should.
The fact that you cannot suggests something wrong at the hardware
level, but it is hard to be sure.
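One way to narrow down a hardware-level problem before retrying the mount is to inspect the assembled array and read straight off the md device; read errors there point at hardware rather than the filesystem. A sketch (the helper name is mine, and /dev/md0 is taken from your report):

```shell
#!/bin/sh
# Sketch: inspect the degraded array and read from the raw md device.
check_array() {
    md="$1"
    if [ ! -e "$md" ]; then
        echo "skip: $md not found"
        return 1
    fi
    # Show which members are active and whether the array is degraded.
    mdadm --detail "$md"
    # Read the first 64 MB straight off the md device; I/O errors here
    # indicate a hardware problem, not a filesystem one.
    dd if="$md" of=/dev/null bs=1M count=64
}

check_array /dev/md0 || true
```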

Can you get any kernel messages between assembling the array and the
mount hanging?
If you could
   echo t > /proc/sysrq-trigger

and capture the output
   dmesg > /some/file

that might also be helpful.
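The steps above could be wrapped in one sequence: start the mount in the background so the hang does not block your shell, then dump all task states and save the kernel log for posting. A sketch, assuming /dev/md0 and a mount point of your choosing (the helper name and timings are mine):

```shell
#!/bin/sh
# Sketch: capture kernel state while the mount is hanging.
capture_hang() {
    md="$1" mnt="$2"
    if [ ! -e "$md" ] || [ ! -w /proc/sysrq-trigger ]; then
        echo "skip: need $md and root access to /proc/sysrq-trigger"
        return 1
    fi
    # Background the mount so a hang does not block this shell.
    mount "$md" "$mnt" &
    sleep 30
    # Dump every task's stack into the kernel log; the hung mount
    # process will show where it is stuck.
    echo t > /proc/sysrq-trigger
    # Save the log for posting to the list.
    dmesg > /tmp/raid-debug.log
}

capture_hang /dev/md0 /mnt || true
```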

NeilBrown
-
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
