On 05/09/2011 03:32 PM, Chris Hoover wrote:
> So, does anyone have any suggestions/experiences in benchmarking storage
> when the storage is smaller than 2x memory?
We had a similar problem when benching our FusionIO setup. What I did
was write a script that cleared out the Linux system cache before every
iteration of our pgbench tests. You can do that easily with:
echo 3 > /proc/sys/vm/drop_caches
Executed as root.
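One small refinement: drop_caches only discards clean pages, so it helps
to sync first so dirty pages get written out and become droppable:

    sync
    echo 3 > /proc/sys/vm/drop_caches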
Then we ran short (10, 20, 30, 40 clients, 10,000 transactions each)
pgbench tests, resetting the cache and the DB after every iteration. It
was all automated in a script, so it wasn't too much work.
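For anyone curious, a rough sketch of that kind of harness looks something
like the following. The database name, scale factor, and output file are
placeholders, not our exact script; it assumes you run as the postgres user
with passwordless sudo for the cache drop.

    #!/bin/bash
    # Sketch of an automated pgbench loop: drop the OS cache, rebuild the
    # benchmark database, then run a fixed workload per client count.
    DB=pgbench_test

    for clients in 10 20 30 40; do
        # Flush dirty pages, then drop the Linux page cache (needs root).
        sync
        echo 3 | sudo tee /proc/sys/vm/drop_caches > /dev/null

        # Reset the database so every iteration starts from identical data.
        dropdb "$DB" 2>/dev/null
        createdb "$DB"
        pgbench -i -s 100 "$DB" > /dev/null

        # 10,000 transactions per client; append results for later comparison.
        pgbench -c "$clients" -t 10000 "$DB" >> results.txt
    done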
We got (roughly) a 15x speed improvement over a six-disk 15k RPM RAID-10
setup on the same server, with no other changes. That was borne out after
deployment, when our frequent periods of 100% disk I/O utilization vanished
and were replaced by occasional 20-30% spikes. Even that comparison is
skewed in favor of the RAID, because we had to add DRBD to the FusionIO
setup, since you can't share a PCI card between two servers.
If you do have two 1.3TB Duo cards in a 4x640GB RAID-10, you should get
even better read times than we did.
--
Shaun Thomas
OptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604
312-676-8870
sthomas@xxxxxxxxx