Re: New to PostgreSQL, performance considerations

On Tue, 12 Dec 2006, Daniel van Ham Colchete wrote:

> I'm making some other tests here on other hardware (also Gentoo). I
> found out that PostgreSQL stops for a while if I change the -t
> parameter on pgbench from 600 to 1000, and I get ~150 tps instead of
> ~950 tps.

Sure sounds like a checkpoint to me; the ones pgbench generates really aren't fun to watch when running against IDE drives. I've seen my test system with 2 IDE drives pause for 15 seconds straight to process one when fsync was on, caching was disabled on the WAL disk, and the shared_buffers cache was large.

If 600 transactions/client finished without hitting a checkpoint but 1000 does hit one, try editing your configuration file to double checkpoint_segments, restart the server, and then try again. This is cheating, but it will prove the source of the problem.
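As a sketch of that workaround (the data directory path and client count are assumptions; checkpoint_segments defaulted to 3 on PostgreSQL servers of this era):

```shell
# In postgresql.conf, double checkpoint_segments from its default:
#   checkpoint_segments = 6
# A restart (or reload) is needed for the change to take effect:
pg_ctl restart -D /usr/local/pgsql/data

# Rerun the same benchmark that triggered the stall:
pgbench -c 10 -t 1000 pgbench
```

With more WAL segments available, the 1000-transaction run may finish before a checkpoint fires, which would confirm checkpoints as the cause of the pauses.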

This kind of behavior is what other list members were trying to suggest to you before: once disk I/O gets involved, it drives the performance characteristics of so many database operations that small improvements from CPU optimization are lost. The regular pgbench run is so dominated by disk writes that it's practically a worst case for what you're trying to show.

I would suggest running all your optimization tests with the -S parameter to pgbench, which limits it to SELECT statements. That will let you benchmark whether the core code is benefiting from the CPU improvements without disk I/O as the main driver of performance.
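A select-only run might look like this (client and transaction counts are assumptions; adjust to match your earlier tests):

```shell
# -S makes every pgbench transaction a single SELECT, so the run is
# CPU/memory bound rather than dominated by WAL and data writes:
pgbench -S -c 10 -t 1000 pgbench
```

Comparing tps between differently compiled builds on this workload should isolate any CPU-level gains.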

--
* Greg Smith gsmith@xxxxxxxxxxxxx http://www.gregsmith.com Baltimore, MD

