Re: Strange behaviour on "toy array"

Patrik Jonsson wrote:

> hi all,
>
> I'm gearing up to set up a 2TB RAID for our research group, and
> just to see how this stuff works I made a loopback array on one of my
> machines. I created 5 loopback devices of 1MB each, created a RAID5
> array, and formatted it. So far so good. I could copy files on and off,
> fail a disk with mdadm -f and then return it, and everything worked as
> I expected. Then I decided to see what happens when things go bad, so
> I failed one disk. Fine: the array reports "clean, degraded" but I can
> still access files. Then I failed another, now expecting not to be
> able to read anything. But the array still reports "clean, degraded"
> and I can still access the files. I then proceeded to fail ALL the
> disks, and the array was still "clean, degraded" and I could read the
> files on it just as well as before. Can anyone explain what's going on
> here? Was I seeing some cached version (given that the array was so
> small)?
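[For reference, a setup along these lines can be reproduced with something like the sketch below. The file paths, device names, and filesystem choice are my assumptions rather than the original poster's commands, and it needs root; the cache-flush step at the end only exists on reasonably recent kernels.]

```shell
# Sketch of the toy array: five 1MB loop devices in a RAID5 (assumed
# paths/devices, not the poster's exact commands; run as root).
for i in 0 1 2 3 4; do
    dd if=/dev/zero of=/tmp/raiddisk$i bs=1024 count=1024
    losetup /dev/loop$i /tmp/raiddisk$i
done

# Build the 5-disk RAID5 array and put a filesystem on it.
mdadm --create /dev/md0 --level=5 --raid-devices=5 \
      /dev/loop0 /dev/loop1 /dev/loop2 /dev/loop3 /dev/loop4
mkfs.ext3 /dev/md0
mount /dev/md0 /mnt/toy

# Fail one member, then inspect the array state.
mdadm /dev/md0 --fail /dev/loop1
cat /proc/mdstat

# With an array this small, reads may be served entirely from the page
# cache, so files staying readable after failures can be misleading.
# Flushing caches before re-reading gives a more honest test:
sync
echo 3 > /proc/sys/vm/drop_caches
```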

I think you'd need to post the commands you used and the output of
things like mdadm --detail and cat /proc/mdstat,
along with your kernel version, mdadm version, etc.
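[Something like the following would cover it; /dev/md0 is an assumption, so substitute the actual array device:]

```shell
# Standard diagnostics for an md problem report
# (/dev/md0 is assumed -- use the real array device).
cat /proc/mdstat          # per-array summary and member state map, e.g. [UUUU_]
mdadm --detail /dev/md0   # clean/degraded state, active vs failed device counts
uname -r                  # kernel version
mdadm --version           # mdadm version
```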

That way we can check that you really did fail the right drives.

Right now it could be anything from (allowed!) user error to a weird ppc
thing...

FYI, I run a 1.2TB array on 6x250GB SATA drives (1 spare) with lvm2 and xfs.

David


-
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
