On Mon, May 9, 2011 at 3:32 PM, Chris Hoover <revoohc@xxxxxxxxx> wrote:
> I've got a fun problem.
>
> My employer just purchased some new db servers that are very large. The
> specs on them are:
>
> 4 Intel X7550 CPUs (32 physical cores, HT turned off)
> 1 TB RAM
> 1.3 TB Fusion IO (2 1.3 TB Fusion IO Duo cards in a RAID 10)
> 3 TB SAS array (48 15K 146GB spindles)

my GOODNESS! :-D. I mean, just, wow.

> The issue we are running into is how do we benchmark this server,
> specifically, how do we get valid benchmarks for the Fusion IO card?
> Normally to eliminate the cache effect, you run iozone and other benchmark
> suites at 2x the ram. However, we can't do that due to 2TB > 1.3TB.
>
> So, does anyone have any suggestions/experiences in benchmarking storage
> when the storage is smaller than 2x memory?

hm, if it were me, I'd write a small C program that just seeks around
directly on the raw device and does random writes, assuming it isn't
formatted. For sequential read, just flush caches and dd the device to
/dev/null. Probably someone will suggest better tools though.

merlin
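
p.s. something like this is the kind of thing I had in mind -- a completely
untested sketch, Linux only, and the device path below is just a
placeholder for wherever the Fusion IO card shows up. It will scribble over
whatever is on the device, so only point it at raw, unformatted storage:

/* randwrite.c -- random 8k writes straight to a block device, bypassing
 * the page cache with O_DIRECT. block size and write count are arbitrary,
 * tune to taste. */
#define _GNU_SOURCE
#define _FILE_OFFSET_BITS 64
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <time.h>
#include <sys/time.h>
#include <sys/ioctl.h>
#include <linux/fs.h>

#define BLOCK_SIZE 8192      /* postgres-ish block size */
#define NUM_WRITES 100000

int main(int argc, char **argv)
{
    const char *dev = (argc > 1) ? argv[1] : "/dev/fioa";  /* placeholder */
    int fd = open(dev, O_WRONLY | O_DIRECT);
    if (fd < 0) { perror("open"); return 1; }

    /* ask the kernel how big the device is */
    unsigned long long dev_size;
    if (ioctl(fd, BLKGETSIZE64, &dev_size) < 0) { perror("ioctl"); return 1; }

    /* O_DIRECT wants an aligned buffer */
    void *buf;
    if (posix_memalign(&buf, 4096, BLOCK_SIZE) != 0) {
        fprintf(stderr, "posix_memalign failed\n");
        return 1;
    }
    memset(buf, 0xab, BLOCK_SIZE);

    long long nblocks = dev_size / BLOCK_SIZE;
    srandom(time(NULL));

    struct timeval t0, t1;
    gettimeofday(&t0, NULL);
    for (int i = 0; i < NUM_WRITES; i++) {
        off_t off = (off_t)(random() % nblocks) * BLOCK_SIZE;
        if (pwrite(fd, buf, BLOCK_SIZE, off) != BLOCK_SIZE) {
            perror("pwrite");
            return 1;
        }
    }
    fsync(fd);
    gettimeofday(&t1, NULL);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
    printf("%d random %d-byte writes in %.2f s (%.0f iops)\n",
           NUM_WRITES, BLOCK_SIZE, secs, NUM_WRITES / secs);

    close(fd);
    return 0;
}

for the sequential read side I'd just do something along the lines of
sync; echo 3 > /proc/sys/vm/drop_caches; dd if=/dev/fioa of=/dev/null bs=1M
(again, substitute whatever your card actually shows up as).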