Greg Smith wrote:
> Yeb Havinga wrote:
>> Please remember that those particular graphs are from a read/write
>> pgbench run on a bigger-than-RAM database that ran for some time (so
>> with checkpoints), on a *single* $435 50GB drive without a BBU RAID
>> controller.
> To get similar *average* performance results you'd need to put about 4
> drives and a BBU into a server. The worst-case latency on that
> solution is pretty bad though, when a lot of random writes are queued
> up; I suspect that's where the SSD will look much better.
> By the way: if you want to run a lot more tests in an organized
> fashion, that's what http://github.com/gregs1104/pgbench-tools was
> written to do. That will spit out graphs by client and by scale
> showing how sensitive the test results are to each.

Got it, running the default config right now.
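(Rough sketch of the kind of by-client sweep that tool automates; this is
not pgbench-tools itself, and the client counts, run length, database name
and tps parsing below are only illustrative assumptions:)

import re
import subprocess

DB = "pgbench"          # assumed: database already initialized with pgbench -i
CLIENTS = [1, 2, 4, 8, 16, 32]
SECONDS = 300           # assumed per-run duration

for clients in CLIENTS:
    out = subprocess.check_output(
        ["pgbench", "-c", str(clients), "-T", str(SECONDS), DB], text=True)
    # pgbench reports throughput on a "tps = ..." line
    match = re.search(r"tps = ([\d.]+)", out)
    print("clients=%d  tps=%s" % (clients, match.group(1) if match else "?"))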
When you say 'comparable to a small array' - could you give a ballpark
figure for 'small'?
regards,
Yeb Havinga
PS: Some update on the testing: I did ext3, ext4, xfs, jfs and also ext2
tests on the just-in-memory read/write test (scale 300). No real winners
or losers, though ext2 isn't really faster, and the manual fsck "fix (y)"
prompts during boot make it impractical in its standard configuration. I
also did some poweroff tests with barriers explicitly off in ext3, ext4
and xfs; all recoveries still went fine.