Re: MD or MDADM bug?

--On Thursday, 1 September 2005 17:26 -0400 "David M. Strang" <dstrang@xxxxxxxxxxxxxx> wrote:

The problem is: my array is now 26 of 28 disks -- /dev/sdm *IS* bad; it
[...]
What can I do? I don't believe this is working as intended.

Maybe you didn't notice, but there are two recent threads that report nearly the same problem. Look at these posts:

08.08.2005: How to recover a multiple raid5 disk failure with mdadm?
30.08.2005: 2 partition kicked from 6 raid5

And as far as I can see there is no solution yet. Maybe it's faster to restore the data from a backup instead of hoping someone can help you. But I really think a howto on "how to recover from a multiple raid5 disk failure" is badly needed. Sometimes it happens that a bad cable or a problem during a resync kicks several disks at once, and the user knows that the data is good, or at least that most of the files are.
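For what it's worth, when the disks themselves are healthy and were only kicked out by a transient failure, the approach usually suggested on this list is to stop the array and force a reassembly so mdadm accepts the members with stale event counts. A rough sketch only -- the device names below are placeholders, not your actual layout, and nothing here is specific to your 28-disk array:

    # check the event counters and state recorded in each member's superblock
    mdadm --examine /dev/sd[a-z] | less

    # stop the partially assembled array
    mdadm --stop /dev/md0

    # force-assemble from the members that still hold good data;
    # the array should come up degraded, without the truly bad disk
    mdadm --assemble --force /dev/md0 /dev/sd[a-l] /dev/sd[n-z]

    # then add a replacement disk and let it resync
    mdadm --add /dev/md0 /dev/sdX

No guarantee this fits your exact situation; if the data is irreplaceable, it is safer to read the two threads above (or ask here) before writing anything to the disks.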

--
Claas Hilbrecht
http://www.jucs-kramkiste.de


-
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
