Re: strange problem with my raid5

I think the normal thing to try in this situation is:

 mdadm --assemble --scan

and if that doesn't work, people normally ask for the output of:

 mdadm -E /dev/sd??

for each drive that should be in the array.

Have a look at dmesg too.
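As a rough sketch, the steps above might look like the following (the device names /dev/sd[b-p] are an assumption for a 15-disk array; substitute the drives actually in yours):

```shell
# Try to reassemble any arrays found in the member superblocks
mdadm --assemble --scan

# If that fails, dump the superblock of each member drive so the
# list can inspect the event counts and array state
# (device range is an assumption; adjust to your actual drives)
for dev in /dev/sd[b-p]; do
    echo "=== $dev ==="
    mdadm -E "$dev"
done

# Check the kernel log for md-related messages
dmesg | grep -i 'md'
```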

I don't know much about md; I just lurk, so apologies if you already know this.

cheers
Simon

On 30/03/2011 13:34, hank peng wrote:
Hi, all:
I created a raid5 array consisting of 15 disks. Before recovery
finished, a power failure occurred. After power was restored, the
machine booted successfully, but "cat /proc/mdstat" gave no
output; the previously created raid5 was gone. I checked the kernel
messages; they are as follows:

<snip>
bonding: bond0: enslaving eth1 as a backup interface with a down link.
svc: failed to register lockdv1 RPC service (errno 97).
rpc.nfsd used greatest stack depth: 5440 bytes left
md: md1 stopped.
iSCSI Enterprise Target Software - version 1.4.1
</snip>

In the normal case, md1 should bind its disks after printing "md: md1
stopped", so what happened in this situation?
BTW, my kernel version is 2.6.31.6.


--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

