Jeff Ross wrote:
> I think I'm doing it right. Here's the whole script. I run it from
> another server on the LAN.
That looks basically sane--your description was wrong, not your program,
which is always better than the other way around.
Note that everything your script is doing and way more is done quite
easily with pgbench-tools:
http://git.postgresql.org/gitweb?p=pgbench-tools.git;a=summary
You can just dump in a list of scales and client counts you want to test
and let it loose; it will generate graphs showing TPS vs. scale and
clients and everything, if gnuplot is available.
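If you'd rather keep a hand-rolled script, the heart of what
pgbench-tools automates is just a nested loop over scales and client
counts. Here's a minimal sketch using stock pgbench--the database name,
scale list, and client list are placeholders, not pgbench-tools' actual
config:

#!/bin/sh
# Sweep a matrix of scaling factors and client counts with stock pgbench.
# The database name, scale list, and client list below are placeholders.
DB=pgbench
for scale in 1 10 70 100; do
    pgbench -i -s "$scale" "$DB"         # rebuild test tables at this scale
    for clients in 1 2 4 8 16; do
        pgbench -c "$clients" -t 20000 "$DB"
    done
done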
> transaction type: TPC-B (sort of)
> scaling factor: 70
> query mode: simple
> number of clients: 70
> number of transactions per client: 20000
> number of transactions actually processed: 1400000/1400000
> tps = 293.081245 (including connections establishing)
> tps = 293.124705 (excluding connections establishing)
This is way more clients than your server is going to handle well on
pgbench's TPC-B test. That test is primarily a test of hard disk write
speed, but it gets bogged down by client contention under many
conditions. Performance degrades considerably once the number of clients
climbs much past the number of cores in the server; typically 1.5 to 2X
as many clients as cores gives peak throughput.
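To make that rule of thumb concrete, you can size the sweep off the core
count and expect the peak somewhere in that band. A sketch, assuming a
BSD box (hw.ncpu is the BSD sysctl; on Linux you'd use nproc or getconf
_NPROCESSORS_ONLN instead):

# Size the client sweep off the core count.
CORES=$(sysctl -n hw.ncpu)
for c in "$CORES" $((CORES * 3 / 2)) $((CORES * 2)); do
    pgbench -c "$c" -t 20000 pgbench    # throughput should peak in this band
done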
I'm not sure what's causing your panic--not enough BSD practice on my
end to say. But I think Tom's suggestion of vastly decreasing
maintenance_work_mem from:

maintenance_work_mem = 240MB

is worth trying. Reducing it won't hurt pgbench performance on quick
tests, just how long it takes to get the tests set up.
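For example, dropping it back to the stock default would look like this
in postgresql.conf (16MB is just the shipped default, not a figure from
Tom):

# postgresql.conf -- pgbench only exercises maintenance_work_mem during
# the index build that "pgbench -i" performs, so a small value costs
# little here beyond slower test setup.
maintenance_work_mem = 16MB    # the stock default, down from pgtune's 240MB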
Sorry about pgtune being a bit aggressive in what it suggests--scaling
it back is on the TODO list, along with hopefully providing more helpful
suggestions for kernel tuning too.
--
Greg Smith 2ndQuadrant Baltimore, MD
PostgreSQL Training, Services and Support
greg@xxxxxxxxxxxxxxx www.2ndQuadrant.com