Re: Strange behaviour on "toy array"

Ruth Ivimey-Cook wrote:


> Yes, I believe this interpretation is correct. Moreover, I've seen this
> happen "for real": when two drives died on my raid5 array while I was
> playing around with it, I started to see some I/O errors, but only for
> things that hadn't just been accessed; recently accessed data was still
> returned fine. As time went by, even that disappeared.
>
> I must admit it's rather disconcerting, but it is a logical result of
> having a block cache.


This makes sense; however, I would have expected /proc/mdstat or something else to tell me the array is DEAD. "clean, degraded" hardly seems like a proper description of a raid5 array without any working drives... Or would that not happen until I tried to write to it (which I haven't gotten around to yet)?
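For what it's worth, if I end up scripting a check for this, I had something like the rough Python sketch below in mind -- it just parses /proc/mdstat and counts the "_" slots in the [UU__] status string. The "DEAD?" label is my own guess at what more-than-one-failure ought to mean for raid5, not anything md itself reports:

import re

def failed_devices(mdstat="/proc/mdstat"):
    """Return {array_name: number of failed slots} based on the [UU__] string."""
    result = {}
    with open(mdstat) as f:
        text = f.read()
    # Each array stanza eventually carries a status string like "[4/2] [UU__]";
    # '_' marks a missing/failed member. This is a rough parse, not exhaustive.
    for name, flags in re.findall(r"^(md\d+)\s*:.*?\[([U_]+)\]\s*$",
                                  text, re.S | re.M):
        result[name] = flags.count("_")
    return result

if __name__ == "__main__":
    for array, failed in failed_devices().items():
        # Purely my own labels: raid5 survives one failure, not two.
        state = "DEAD?" if failed > 1 else ("degraded" if failed else "ok")
        print(f"{array}: {failed} failed member(s) -> {state}")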

I must admit I don't remember seeing in the FAQ or anywhere else what is supposed to happen when you lose more than one drive. I sort of expected the entire array to go offline, but it seems it just limps along the way it would with a single faulty drive?
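Before I risk a write, I was also thinking of confirming the block-cache theory with roughly the following (untested sketch; /mnt/raid/testfile is just a placeholder for some file on the broken array, and dropping caches needs root). The idea is that the first read should still be served from the page cache, while after echoing 3 into /proc/sys/vm/drop_caches the same read has to touch the dead drives and should come back with EIO:

path = "/mnt/raid/testfile"   # placeholder: any file on the failed array

with open(path, "rb") as f:
    data = f.read(4096)        # likely satisfied from the page/block cache
print("cached read ok:", len(data), "bytes")

# Ask the kernel to drop clean cached pages (and dentries/inodes); needs root.
with open("/proc/sys/vm/drop_caches", "w") as f:
    f.write("3\n")

try:
    with open(path, "rb") as f:
        f.read(4096)           # now the read must hit the dead drives
    print("read still succeeded -- so it wasn't just the cache?")
except OSError as e:
    print("read failed as expected:", e)   # typically EIO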

/Patrik
