On Sun, 25 May 2008, Marc wrote:
> [postgres@dbnya1 ~]$ pgbench -p 5462 -c 20 -t 100 pgbench-md3000
> starting vacuum...end.
> transaction type: TPC-B (sort of)
> scaling factor: 1
> number of clients: 20
> number of transactions per client: 100
> number of transactions actually processed: 2000/2000
A pgbench run like this is pretty much worthless. To get useful results from it, you need:
1) The database scale, as specified by "pgbench -i -s <scale>", to be larger than the number of clients.
2) Run the benchmark for a fairly long time. 2000 transactions is barely doing anything; aim for close to 100,000 for a quick test and ten times that for a serious one. (See the example after this list.)
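For example, a more meaningful run might look like the following. This is only a sketch, reusing the port and database name from your command; the scale of 50 is an arbitrary value chosen just to be larger than your 20 clients, and you may want it bigger still depending on your RAM:

    pgbench -i -s 50 -p 5462 pgbench-md3000       # rebuild the database at scale 50 (> 20 clients)
    pgbench -p 5462 -c 20 -t 5000 pgbench-md3000  # 20 clients x 5000 each = 100,000 transactions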
What you're seeing right now is how long it takes to sync 2000 transactions to disk, which is an interesting number but probably not what you intended to measure. That's not enough data to even write anything to the main database disk; it will all just get cached in memory and written out after the test is over.
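To put rough numbers on that: at scale 1, pgbench creates only 100,000 rows in the accounts table, which works out to somewhere around 15MB of data (the row count is exact, the size an estimate), and 2000 short transactions dirty only a small fraction of that, so everything fits comfortably in shared_buffers and the OS cache.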
Increasing the scale can be tricky, as you then need to consider how much RAM and caching are involved. I started putting some articles on this topic at http://www.westnet.com/~gsmith/content/postgresql/ that you should find useful. I hope you know to do things like increasing shared_buffers to take advantage of the RAM in your server.
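As a rough sketch only, since the right values depend on your version and workload (1GB here assumes a dedicated server with several GB of RAM, and memory units require PostgreSQL 8.2 or later):

    # postgresql.conf
    shared_buffers = 1GB         # the default is far too small for a dedicated server
    effective_cache_size = 3GB   # hint about OS cache size; doesn't allocate anything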
> So, for some strange reason, pgsql is struggling with performance when being run off this external disk.
Your internal disk is probably caching writes and isn't safe to run a database from, so it's cheating. If you run a much longer test with a much larger database scale, the array may pull ahead anyway. See http://www.westnet.com/~gsmith/content/postgresql/TuningPGWAL.htm for notes on that topic.
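One way to check for that on Linux, assuming a SATA/IDE drive (the /dev/sda device name here is a guess for your system):

    hdparm -W /dev/sda     # report whether the drive's write cache is enabled
    hdparm -W0 /dev/sda    # disable the write cache for a fair (and safe) comparison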
--
* Greg Smith gsmith@xxxxxxxxxxxxx http://www.gregsmith.com Baltimore, MD