Re: Postgres benchmarking with pgbench

Hi Greg,

thanks a lot for your hints. I changed my config and switched from RAID 6 to RAID 10, but whatever I do, the benchmark breaks down at a scaling factor of 75, where the database is "only" 1126MB.

Here are my benchmark results (scaling factor, DB size in MB, TPS) using:
  pgbench -S -c  X  -t 1000 -U pgsql -d benchmark -h MYHOST

   1    19   8600
   5    79   8743
  10   154   8774
  20   303   8479
  30   453   8775
  40   602   8093
  50   752   6334
  75  1126   3881
 150  2247   2297
 200  2994    701
 250  3742    656
 300  4489    596
 400  5984    552
 500  7479    513
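
For reference, each data point was produced roughly like this (a sketch of my loop, not the exact script; the client count is passed in, same as the X above):

  #!/bin/sh
  # rough sketch of the benchmark loop (illustrative, not my exact script)
  CLIENTS="$1"    # same client count as in the command above
  for SCALE in 1 5 10 20 30 40 50 75 150 200 250 300 400 500; do
      # rebuild the pgbench tables at this scaling factor
      pgbench -i -s "$SCALE" -U pgsql -h MYHOST benchmark
      # record the on-disk size of the database
      psql -U pgsql -h MYHOST -d benchmark \
           -c "SELECT pg_size_pretty(pg_database_size('benchmark'))"
      # select-only run, as in the command above
      pgbench -S -c "$CLIENTS" -t 1000 -U pgsql -d benchmark -h MYHOST
  done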

I have no idea if this is any good for a quad-core Intel(R) Xeon(R) CPU E5320 @ 1.86GHz with 4GB RAM and 6 SATA disks (7200rpm) in RAID 10.

Here is my config (maybe with some odd setting): http://pastebin.com/m5d7f5717

I played around with these settings (example values shown after the list):
- max_connections
- shared_buffers
- work_mem
- maintenance_work_mem
- checkpoint_segments
- effective_cache_size
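
Roughly along these lines (illustrative numbers only, not necessarily my current values; my exact settings are in the pastebin above):

  # postgresql.conf excerpt (example values, for illustration)
  max_connections      = 100
  shared_buffers       = 1GB       # as Greg noted below
  work_mem             = 16MB
  maintenance_work_mem = 256MB
  checkpoint_segments  = 30        # raised from the default of 3, per the advice below
  effective_cache_size = 3GB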

But whatever I do, the graph looks the same. Any hints or tips on what my config should look like? Or are these results actually okay? Maybe I am driving myself crazy over nothing?

Cheers,
Mario


Greg Smith wrote:
On Mon, 16 Mar 2009, ml@xxxxxxxxx wrote:

Any idea why my performance collapses at 2GB database size?

pgbench results follow a general curve I outlined at http://www.westnet.com/~gsmith/content/postgresql/pgbench-scaling.htm and the spot where performance drops hard depends on how big a working set of data you can hold in RAM. (That shows a select-only test, which is why the results are so much higher than yours; all the tests work similarly as far as the curve they trace.)
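
A quick way to see how big that working set actually is: compare the accounts table (the table the select-only test reads) and the whole database against your RAM. A rough check, assuming an older pgbench where the table is named accounts rather than pgbench_accounts:

  # size of the table the -S test hammers
  psql -d benchmark -c "SELECT pg_size_pretty(pg_relation_size('accounts'))"
  # total on-disk size of the benchmark database
  psql -d benchmark -c "SELECT pg_size_pretty(pg_database_size('benchmark'))"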

In your case, you've got shared_buffers=1GB, but the rest of the RAM in the server isn't so useful to you because you've got checkpoint_segments set to the default of 3. That means your system is continuously doing small checkpoints (check your database log files, you'll see what I mean), which keeps things from ever really using much RAM before everything has to get forced to disk.
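
If you want to watch that happening, the checkpoints can be made visible in the server log; a minimal example for 8.3 (checkpoint_warning is already on by default, log_checkpoints is not):

  # postgresql.conf: make checkpoint activity visible in the log
  log_checkpoints    = on      # logs every checkpoint (8.3 and later)
  checkpoint_warning = 30s     # warn when checkpoints come faster than this (the default)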

Increase checkpoint_segments to at least 30, and bump your transactions/client to at least 10,000 while you're at it--the 32000 transactions you're doing right now aren't nearly enough to get good results from pgbench; 320K is in the right ballpark. That might be enough to push your TPS fall-off a bit closer to 4GB, and you'll certainly get more useful results out of the longer test. I'd suggest adding in scaling factors of 25, 50, and 150; those should let you see the standard pgbench curve more clearly.
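
To be concrete, the longer run would look something like this (a sketch based on your command line above, keeping whatever client count you were using):

  # same select-only test, 10x the transactions per client
  pgbench -S -c $CLIENTS -t 10000 -U pgsql -d benchmark -h MYHOST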

On this topic: I'm actually doing a talk introducing pgbench use at tonight's meeting of the Baltimore/Washington PUG, if any readers of this list are in the area it should be informative: http://archives.postgresql.org/bwpug/2009-03/msg00000.php and http://omniti.com/is/here for directions.

--
* Greg Smith gsmith@xxxxxxxxxxxxx http://www.gregsmith.com Baltimore, MD



--
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
