Strange behaviour on "toy array"

hi all,

I'm gearing up to set up a 2 TB RAID for our research group, and just to see how this stuff works I made a loopback array on one of my machines. I created 5 loopback devices of 1 MB each, created a RAID5 array from them and formatted it. So far so good: I could copy files on and off, fail a disk with mdadm -f, add it back again, and everything behaved as I expected.

Then I decided to see what happens when things really go bad, so I failed one disk. Fine, the array reported "clean, degraded" but I could still access the files. Then I failed another, now expecting not to be able to read anything. But the array still reported "clean, degraded" and I could still access the files. I then proceeded to fail ALL the disks, and the array was still "clean, degraded" and I could read the files just as well as before. Can anyone explain what's going on here? Was I just seeing a cached copy of the data (given that the array was so small)?
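For reference, this is roughly what I did. The backing file names, loop devices, mount point and filesystem type below are just what I happened to use, so treat them as placeholders:

    # create five 1 MB backing files and attach them to loop devices
    for i in 0 1 2 3 4; do
        dd if=/dev/zero of=/tmp/raid$i bs=1024 count=1024
        losetup /dev/loop$i /tmp/raid$i
    done

    # build a 5-disk RAID5 array from the loop devices and format it
    # (filesystem shown here is ext2; the exact choice shouldn't matter)
    mdadm --create /dev/md0 --level=5 --raid-devices=5 \
        /dev/loop0 /dev/loop1 /dev/loop2 /dev/loop3 /dev/loop4
    mke2fs /dev/md0
    mount /dev/md0 /mnt/test

    # fail members one at a time and check what the array reports
    mdadm /dev/md0 -f /dev/loop1
    cat /proc/mdstat
    mdadm --detail /dev/md0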

This is on a machine running Fedora Core 3 (ppc).

thanks,

/Patrik

-
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
