Anyway, I seriously doubt we could convince operations at our manufacturing facilities to add ramdrives to their mostly HP servers. I don't even know if they'd fit in the blade servers they most commonly use.
John, I'm not trying to convince you to use ramdrives. I just want to point out the following.
On the point of hardware RAID with a battery-backed write cache: its write performance comes mainly from the cache itself, not from the fact that the RAID processing is done on the card. That is why RAID card manufacturers such as 3ware and Areca offer cache sizes of up to 2GB.
However, if you are running RAID5 and a disk drops out, that card takes a severe performance hit, even on writes: every access involving the missing disk forces the card's processor to read all of the surviving disks and XOR the data back together, and if that processor can't keep up, the benefit of the cache is nullified. Same story if the onboard processor is weak to begin with.
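As a rough illustration of that degraded-mode cost, here's a throwaway software-RAID5 experiment on loopback devices (device names, sizes, and counts are all made up for the sketch, but the parity reconstruction it exercises is the same work a degraded hardware card has to do):

    # Build a scratch 4-disk RAID5 from loopback files:
    for i in 0 1 2 3; do
        dd if=/dev/zero of=/tmp/disk$i bs=1M count=256
        losetup /dev/loop$i /tmp/disk$i
    done
    mdadm --create /dev/md1 --level=5 --raid-devices=4 /dev/loop[0-3]
    # (wait for the initial resync in /proc/mdstat to finish before timing)

    # Time a sequential read while healthy, then fail a member and repeat.
    # The second read should be noticeably slower, since every stripe that
    # touches the failed disk must be XOR-reconstructed on the fly:
    dd if=/dev/md1 of=/dev/null bs=1M count=200
    mdadm /dev/md1 --fail /dev/loop2
    dd if=/dev/md1 of=/dev/null bs=1M count=200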
In such cases, software RAID will often perform better, because the parity math runs on the system CPU, which is usually much faster than the embedded processor on the card.
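You can watch the md driver doing that work on an ordinary CentOS box; these knobs are standard, though exact kernel thread names vary by kernel version:

    cat /proc/mdstat                   # array state and resync/rebuild progress
    sysctl dev.raid.speed_limit_min    # kernel throttles on rebuild bandwidth
    sysctl dev.raid.speed_limit_max
    top                                # look for the mdX_raid5 kernel thread using CPU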
So if you run RAID10, hardware RAID with a write cache is probably the thing to do, since it most likely handles mirrors better than the md driver, although I still feel more warm and fuzzy about the filesystem keeping its journal on a ramdrive; a sketch of that setup follows.
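For what it's worth, here's roughly what that combination looks like with the md driver and an external ext3 journal on a ramdrive (all device names are placeholders, and keep in mind a journal on volatile RAM is gone after a power failure, so treat this strictly as a sketch):

    # 4-disk RAID10 with the md driver:
    mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sd[b-e]1

    # Dedicated journal device on a ramdrive; the block size must match
    # between the journal device and the filesystem that uses it:
    mke2fs -b 4096 -O journal_dev /dev/ram0
    mkfs.ext3 -b 4096 -J device=/dev/ram0 /dev/md0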