problem w/crazy config

Hi,

I'm using a somewhat stupid config due to lack of dosh.

I have two Promise PCI dual-channel EIDE cards, each with 4 drives attached.

I have 4 200GB drives in a RAID-5 array as masters of each channel, and 4 80GB drives in a RAID-5 array as slaves of each channel.

I also upgraded mdadm to mdadm-1.9.0-1 to fix the problem with 'auto' in mdadm.conf, but I'm not sure if that actually makes any difference.
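To illustrate the 'auto' option I mean -- the device names and UUIDs below are placeholders, not my real config:

```
# Illustrative mdadm.conf fragment; substitute your own devices/UUIDs.
DEVICE /dev/hd*

# 'auto=yes' tells mdadm to create the /dev/mdX node if it is missing --
# this is the option that the older mdadm mishandled for me.
ARRAY /dev/md0 auto=yes UUID=<uuid-of-200GB-array>
ARRAY /dev/md1 auto=yes UUID=<uuid-of-80GB-array>
```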

In any case, the problem is that one of my drives keeps getting marked 'dirty'. I have since commented out the other array in fstab so it isn't being used, and everything works fine now.
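In case it helps, this is roughly how I've been checking the state (the md device name is a placeholder -- substitute yours):

```shell
# Overview of all arrays, their member drives, and resync progress.
cat /proc/mdstat

# Per-array detail, including the State line ('clean' vs 'dirty').
mdadm --detail /dev/md0

# Kernel messages around the time the array goes dirty.
dmesg | grep -i md
```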

I did a 'smartctl -t long' and used the IBM disk tool thingy to test the drive, and both say it is fine.
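For the curious, the SMART test sequence was along these lines (the device name is a guess at how a Promise-attached drive shows up -- substitute the suspect drive's actual node):

```shell
# Kick off the long self-test; it runs in the background on the drive.
smartctl -t long /dev/hde

# Read the self-test log once the test finishes.
smartctl -l selftest /dev/hde

# Full SMART attributes are worth eyeballing too (reallocated sectors etc.).
smartctl -a /dev/hde
```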

I figured it would be OK to make one array as master devices and one array as slaves because I 'know' I will 'never' access both at the same time - at least, not intensively.

Can anyone suggest why this might be happening?

Max.
