Hi all,
I have a database that we've been doing some load testing on. The
performance isn't quite where we want it to be, and while I don't
believe it's a database issue, I'm doing what I can to help speed things up.
We started with an empty database and kicked off our benchmark run. After
30 minutes the database had grown to around 400MB, not very large at all.
While the test ran I grabbed some statistics and found some interesting
numbers in pg_stat_bgwriter.
Over the 30-minute test there were 0 buffers written by checkpoints (I
have checkpoint_timeout set to 30 minutes and checkpoint_segments to 64,
and no checkpoint occurred during the test), 0 buffers written by the
background writer, and roughly 44,000 "buffers written by backends (that
is, not by the background writer)".
From my understanding, this means that during the inserts the backends
themselves are writing dirty buffers out to disk (to make room for new
pages in shared_buffers, I assume) rather than leaving that work to the
background writer and checkpoints. Is this just expected and unavoidable
behavior when inserting data?
Again, this isn't a ton of data, and the database is performing well from
what I've observed. But if there is a way, during a large data import, to
keep backends from writing buffers to disk themselves and instead have
that work taken care of later, that would be ideal.
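In case it helps frame the question, these are the knobs I've been
thinking of experimenting with to make the background writer more
aggressive; I don't know yet whether they're the right lever:

    # postgresql.conf -- hypothetical tuning, untested on my end
    bgwriter_delay = 50ms          # wake the writer more often (default 200ms)
    bgwriter_lru_multiplier = 4.0  # write further ahead of demand (default 2.0)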
The application we tested basically has thousands of tasks to do and can
only run 80 or so of them at any given time, so every millisecond I can
shave off a transaction adds up in the end.
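(Somewhat orthogonal to the buffer question: since per-transaction
latency is what hurts here, I've also been meaning to try relaxing
synchronous_commit for just these tasks, which trades a small window of
possible transaction loss on a crash for faster commits, e.g.:

    SET synchronous_commit TO off;  -- per session, before the task's transactions

If anyone has experience with that trade-off under this kind of workload,
I'd be glad to hear it.)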
My setup:
OS: CentOS 6 Linux 64 bit
PostgreSQL Version: 9.1.2, installed from source
14 GB system memory
work_mem = 50MB
shared_buffers = 4GB
checkpoint_timeout = 30min
checkpoint_segments = 64
checkpoint_completion_target = 0.5 (default)
bgwriter_delay = 200ms (default)
bgwriter_lru_maxpages = 1000
bgwriter_lru_multiplier = 2 (default)