>>> On Fri, Mar 17, 2006 at 6:24 am, in message
<20060317152448.452e4854.eugrid@xxxxxxxxxxxx>, Evgeny Gridasov
<eugrid@xxxxxxxxxxxx> wrote:

> I've made some tests with pgbench

If possible, tune the background writer with your actual application code under normal load. Optimal tuning will vary based on usage patterns. You can change these settings on the fly by editing the postgresql.conf file and running pg_ctl reload. This is very convenient: it allowed us to try various settings in our production environment while two machines handled normal update and web traffic and a third ran a saturated update process. For us, the key seems to be getting dirty blocks pushed out to the OS-level cache as soon as possible, so that the OS can deal with them before the checkpoint comes along.

> for all tests:
> checkpoint_segments = 16
> checkpoint_timeout = 900
> shared_buffers = 65536
> wal_buffers = 128
> ./pgbench -c 32 -t 500 -U postgres regression

Unless you are going to be running in short bursts of activity, be sure the test is sustained long enough to get through several checkpoints and settle into a "steady state" with any caching controller, etc. On the face of it, this test doesn't seem to show anything except how the system behaves with a relatively short burst of activity sandwiched between big blocks of idle time. I suspect your second test looks so good because it is just timing how fast it can push a few rows into cache space.

> Setting bgwriter_delay to higher values leads to slower postgresql shutdown time
> (I see postgresql writer process writing to disk). Sometimes postgresql didn't
> shutdown correctly (doesn't complete background writing?).

Yeah, here's where it gets to finishing all the work you avoided measuring in your benchmark.

-Kevin
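P.S. For anyone following along, a postgresql.conf sketch of the background-writer settings under discussion (values shown are the 8.1 defaults and are illustrative only, not recommendations; tune them against your own workload as described above):

```conf
# Background writer tuning (PostgreSQL 8.1-era parameters).
# bgwriter_delay: sleep between writer rounds, in milliseconds.
# Higher values mean more deferred work at checkpoint/shutdown time.
bgwriter_delay = 200

# How much of the recently-used buffer list to scan each round,
# and the cap on pages written from it per round.
bgwriter_lru_percent = 1.0
bgwriter_lru_maxpages = 5

# Same idea, but scanning the whole buffer pool.
bgwriter_all_percent = 0.333
bgwriter_all_maxpages = 5
```

After editing, `pg_ctl reload` picks the new values up without a restart, so you can iterate under live load.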