John R Pierce wrote:
> I can accept faster in certain cases but if you say HUGELY faster, I would like to see some numbers.
> now, everything I've described above is a rather unusual application... so let me present a far more common scenario...
Not so. I used to run boxes that each handled 600 to 1000 SMTP connections. Creating and deleting thousands of small files was the environment I worked in.
> Relational DB management servers, like Oracle or PostgreSQL: when the
> RDBMS does a 'commit' at transaction END;, the server HAS to fsync its
> buffers to disk to maintain data integrity. With a writeback-cache disk
> controller, the controller can acknowledge the writes as soon as the
> data is in its cache, then write that data to disk at its leisure. With
> software RAID, the server has to wait until ALL drives of the RAID
> slice have seeked and completed the physical writes to disk. In a
> write-intensive database, where most of the read data is cached in
> memory, this is a HUGE performance hit.
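As for numbers, the commit path is easy to measure directly. Here is a minimal sketch of mine (not from this thread; it assumes Linux, and the file name and iteration count are arbitrary) that times write()+fsync() pairs. Run it once against a file on the hardware RAID volume and once against the md volume and compare commits/sec: with a battery-backed writeback cache the fsync returns from controller RAM, while against bare spindles it waits on rotational latency.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/time.h>

int main(int argc, char **argv)
{
    /* hypothetical test file; pass a path on the array under test */
    const char *path = argc > 1 ? argv[1] : "commit-test.dat";
    char buf[512];               /* one 512-byte "transaction" record */
    struct timeval t0, t1;
    int fd, i, iters = 1000;

    memset(buf, 0, sizeof buf);
    fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); return 1; }

    gettimeofday(&t0, NULL);
    for (i = 0; i < iters; i++) {
        if (write(fd, buf, sizeof buf) != (ssize_t)sizeof buf) {
            perror("write"); return 1;
        }
        /* the "commit": block until the data is on stable storage */
        if (fsync(fd) != 0) { perror("fsync"); return 1; }
    }
    gettimeofday(&t1, NULL);

    {
        double secs = (t1.tv_sec - t0.tv_sec)
                    + (t1.tv_usec - t0.tv_usec) / 1e6;
        printf("%d write+fsync pairs in %.2f s = %.0f commits/sec\n",
               iters, secs, iters / secs);
    }
    close(fd);
    return 0;
}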
I must say that the 3ware cards in those boxes that had them were not 955x-series and therefore had no cache. Perhaps things would have been different with 955x cards in there, but at that time the 9xxx-series 3ware cards were not even out yet.
Since you have clearly pointed out that the performance benefit really comes from the cache on the board (if you have enough of it), I do not see why software RAID with a battery-backed RAM card, like the umem or even the Gigabyte i-RAM, holding the filesystem's journal would be any slower, if it is slower at all.
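Concretely, the setup I have in mind would be something like this. The device names are made up for illustration (say /dev/rda is the battery-backed RAM card and /dev/md0 the md array), and the external journal has to be created with the same block size as the filesystem:

mke2fs -b 4096 -O journal_dev /dev/rda
mke2fs -b 4096 -j -J device=/dev/rda /dev/md0

Every journal commit then lands in battery-backed RAM, which is the same trick the controller's writeback cache is playing, just without the controller.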