On Sun, 7 Dec 2008, Scott Marlowe wrote:
> When I last used pgbench I wanted to test it with an extremely large
> dataset, but it maxes out at -s 4xxx or so, and that's only in the
> 40Gigabyte range. Is the limit raised for the pgbench included in
> contrib in 8.4? I'm guessing it's an arbitrary limit.
There's no artificial limit, just ones that result from things like
integer overflow. I don't think this has been an issue so far because
pgbench becomes seek-limited and stops producing interesting results once
the database exceeds the sum of all available caching, which means you'd
need more than 32GB of RAM in the system running pgbench before this is an
issue. That happens to be the largest size of system I've ever run it on...
I'd expect this statement around line 1060 of pgbench.c
to overflow first:
for (i = 0; i < naccounts * scale; i++)
Here i is an int, naccounts = 100,000, and scale is the value you supply
with -s.
That product overflows a signed 32-bit integer at a scale of 21475, since
100,000 * 21,475 = 2,147,500,000 exceeds INT_MAX (2,147,483,647). A
failure at -s 4xxx is too early to be explained by this overflow alone, so
something else may be biting there, but any run at scale 21475 or above
would silently execute against a weird data set.
It's not completely trivial to fix (I just tried): the whole loop needs to
be restructured to iterate over scale and the 100,000 accounts separately,
while keeping the current progress report intact. I'll take a shot at
fixing it correctly; this is a bug that should be corrected before 8.4
goes out. I guarantee that version will be used on systems with 64GB of
RAM, where this matters.
--
* Greg Smith gsmith@xxxxxxxxxxxxx http://www.gregsmith.com Baltimore, MD
--
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance