Hi,
I'm using a somewhat stupid config due to lack of dosh.
I have two Promise PCI dual-channel EIDE cards, each with four drives attached.
I have four 200GB drives in a RAID-5 array, one as the master on each channel, and four 80GB drives in a second RAID-5 array, one as the slave on each channel.
I also upgraded to mdadm-1.9.0-1 to fix the problem with the 'auto' keyword in mdadm.conf, though I'm not sure whether that actually makes any difference.
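For reference, a config along these lines is what exercises the 'auto' keyword (the device names and partitions below are illustrative, not my actual layout):

```
# Illustrative mdadm.conf fragment - device names are made up.
# 'auto=yes' asks mdadm to create the /dev/md* node if it doesn't exist.
DEVICE /dev/hd*[0-9]
ARRAY /dev/md0 auto=yes devices=/dev/hde1,/dev/hdg1,/dev/hdi1,/dev/hdk1
ARRAY /dev/md1 auto=yes devices=/dev/hdf1,/dev/hdh1,/dev/hdj1,/dev/hdl1
```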
In any case, the problem is that one of my drives keeps being marked 'dirty'. I have since commented the other array out of fstab so it isn't being used, and it works fine now.
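For what it's worth, a clean array shows all members up in /proc/mdstat; here is a sketch of what I look for (the sample mdstat line below is made up to show the format, not my real output):

```shell
# On the real machine I just look at /proc/mdstat (and 'mdadm --detail /dev/md0').
# The sample line below is made up to show the format: [4/4] [UUUU] means all
# four members are up, while [4/3] [UUU_] would mean the array is degraded.
sample='md0 : active raid5 hdg1[3] hde1[2] hdc1[1] hda1[0]
      585937152 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]'
echo "$sample" | grep -q '\[UUUU\]' && echo "all members up"
```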
I ran 'smartctl -t long' and also tested the drive with the IBM disk tool, and both say it is fine.
I figured it would be OK to put one array on the masters and the other on the slaves because I 'know' I will 'never' access both at the same time - at least, not intensively.
Can anyone suggest why this might be happening?
Max.