> Been following this with interest, as just about everything I'm building
> these days has RAID1 for boot and data (typical small server setup), and
> RAID5 in larger boxes for data, with ext3 ...
>
> No problems with this yet - several power failures and lost disks, and it's
> all generally behaved as I expected it to. I've hot-changed SCSI drives
> which have failed and cold-changed IDE drives at a convenient time for the
> server...
>
> I did have a problem recently though - had a disk fail in an 8-disk
> external SCSI array, arranged as a 7+1 RAID5 ... Then 5 minutes later had
> a 2nd disk fail.
>
> So to the upper layers, ext3, userland, etc. that should look like a
> catastrophic hardware failure -- anything trying to read/write to it
> should (IMO) have simply returned with IO errors.

That depends on the options the filesystem was mounted with, or the
options set in the superblock. The choices are continue, remount
read-only, or panic.

Regards,
Morten

----
A: No.
Q: Should I include quotations after my reply?

-
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
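
P.S. For anyone wondering how those three behaviors are actually selected, a
minimal sketch (device and mount-point names are placeholders, and these
commands need root on a real block device):

```shell
# Choose the on-error behavior at mount time via the errors= option.
# /dev/md0 and /mnt/data are placeholder names.
mount -t ext3 -o errors=remount-ro /dev/md0 /mnt/data
# (alternatives: errors=continue, errors=panic)

# Or store the default in the superblock, used when no errors= option
# is given at mount time:
tune2fs -e remount-ro /dev/md0

# Inspect what the superblock currently has set:
tune2fs -l /dev/md0 | grep -i 'errors behavior'
```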