My guess is it will not change state until it needs to access a disk. So, try some writes!

> -----Original Message-----
> From: linux-raid-owner@xxxxxxxxxxxxxxx [mailto:linux-raid-
> owner@xxxxxxxxxxxxxxx] On Behalf Of Patrik Jonsson
> Sent: Monday, May 16, 2005 5:54 PM
> To: Ruth Ivimey-Cook; linux-raid@xxxxxxxxxxxxxxx
> Subject: Re: Strange behaviour on "toy array"
>
> Ruth Ivimey-Cook wrote:
>
> >Yes, I believe this interpretation is correct. Moreover, I've seen this
> >happen "for real": when two drives died on my raid5 array while I was
> >playing, I started to see some I/O errors, but only for things that
> >hadn't just been accessed; recently accessed things were returned fine.
> >As time went by, even those disappeared.
> >
> >I must admit it's rather disconcerting, but it is a logical result of
> >having a block cache.
>
> This makes sense; however, I would have expected /proc/mdstat or
> something to tell me the array is DEAD. It seems "clean, degraded" is
> not a proper description of a raid5 without any working drives... Or
> would this not happen until I tried to write to it (which I haven't
> gotten to yet)?
>
> I must admit I don't remember seeing in the FAQ or anywhere what is
> supposed to happen when you lose more than one drive. I sort of expected
> the entire array to go offline, but it seems it just limps along like a
> normal faulty drive would?
>
> /Patrik
>
> -
> To unsubscribe from this list: send the line "unsubscribe linux-raid" in
> the body of a message to majordomo@xxxxxxxxxxxxxxx
> More majordomo info at http://vger.kernel.org/majordomo-info.html
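
For what it's worth, even when the state line still says "clean, degraded", the per-device flags in /proc/mdstat do show the damage. A rough sketch below, using a made-up mdstat snapshot (device names, sizes, and the /tmp path are illustrative, not from this thread) to show what to look for:

```shell
# Hypothetical /proc/mdstat output for a 3-disk raid5 that has lost two
# members (everything here is illustrative, not taken from the thread):
cat <<'EOF' > /tmp/mdstat.sample
Personalities : [raid5]
md0 : active raid5 sdc1[2](F) sdb1[1](F) sda1[0]
      1953513472 blocks level 5, 64k chunk, algorithm 2 [3/1] [U__]

unused devices: <none>
EOF

# The "[3/1] [U__]" fields are the real health indicator: only 1 of 3
# members is up, and the failed slots are also marked (F) in the device
# list, even if the array state still reads "clean, degraded".
grep -o '\[U__\]' /tmp/mdstat.sample
```

And to make the kernel actually touch the disks instead of serving stale data from the block cache, a direct-I/O read such as `dd if=/dev/md0 of=/dev/null bs=64k count=16 iflag=direct` (iflag=direct is a GNU dd option) should produce I/O errors immediately on a dead array, rather than only after the cached blocks age out.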