Jeff Ross wrote:
> I'm trying to put a new server on line and I'm having a problem
> getting any kind of decent performance from it. pgbench yields around
> 4000 tps until scale and clients both are above 21, then I see the
> following:
> NOTICE: ALTER TABLE / ADD PRIMARY KEY will create implicit index
> "pgbench_accounts_pkey" for table "pgbench_accounts"
> ERROR: out of memory
> DETAIL: Failed on request of size 67108864.
You've got "maintenance_work_mem = 240MB", but it looks like your OS is
not allowing you to allocate more than around 64MB--the failed request of
67108864 bytes is exactly 64MB. Have you looked at the active ulimit
settings for the accounts involved?
> The controller cache is set to write thru for all three volumes
> because tests using dd and bonnie++ show that write thru is twice as
> fast as write back. I haven't dug into that any more to figure out why.
That's bizarre, and you'll never get good pgbench results that way
regardless of what dd/bonnie++ say--pgbench does database commits, which
is what you need the cache to accelerate, while those two tests don't.
But I don't think this is relevant to your immediate problems, because
you're running the select-only test so far, which isn't doing writes at
all. Regardless, if I got a new system and it performed worse on
dd/bonnie++ with the cache turned on, I'd send it back.
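As a rough illustration of the difference (a sketch, not one of the tests above): the pattern a write cache accelerates is a sync after every small write, which GNU dd can approximate with oflag=dsync:

```shell
# Write 8kB blocks, syncing each one to disk before the next --
# roughly the I/O pattern of many small committed transactions.
# With write-thru, the rate is bounded by physical disk latency;
# a battery-backed write-back cache should be dramatically faster.
dd if=/dev/zero of=synctest.dat bs=8k count=100 oflag=dsync
rm -f synctest.dat
```

Plain sequential dd and bonnie++ runs never issue this kind of per-block sync, which is why they can look fine while commit-heavy workloads suffer.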
> time pgbench -h $HOST -t 2000 -c $SCALE -S pgbench
Your number of transactions here is extremely low. I'd bet you're mostly
measuring startup overhead. Try using 20,000 per client to start instead
and see what happens. On the select-only test, you can easily need 1M
total transactions to get an accurate reading.
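For example (the client count of 50 below is purely illustrative; pgbench's -t is transactions *per client*, so the total scales with -c):

```shell
# Illustrative numbers only: 50 clients at 20,000 transactions each.
CLIENTS=50
TXNS_PER_CLIENT=20000
echo "total transactions: $((CLIENTS * TXNS_PER_CLIENT))"   # 1,000,000

# The corresponding run would look like:
#   time pgbench -h $HOST -t $TXNS_PER_CLIENT -c $CLIENTS -S pgbench
```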
--
Greg Smith 2ndQuadrant Baltimore, MD
PostgreSQL Training, Services and Support
greg@xxxxxxxxxxxxxxx www.2ndQuadrant.com
--
Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)