Matthew Wakeling wrote:
On Sat, 7 Feb 2009, justin wrote:
In a big database a checkpoint could get very large before the checkpoint
timeout had elapsed, and if the server crashed all that work would be rolled back.
No. Once you commit a transaction, it is safe (unless you play with
fsync or asynchronous commit). The size of the checkpoint is irrelevant.
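To make that caveat concrete, here is a minimal sketch of the
asynchronous-commit trade-off (synchronous_commit is the relevant setting;
the session below is only illustrative):

    -- Default behaviour: COMMIT does not return until the WAL record
    -- has been fsynced to disc, so the transaction survives a crash.
    SET synchronous_commit = on;

    -- Asynchronous commit: COMMIT returns before the WAL flush. A crash
    -- can lose the last few transactions, but cannot corrupt the data.
    SET synchronous_commit = off;

Turning fsync itself off is a different, far more dangerous knob: a crash
with fsync = off can leave the cluster corrupt, not just missing a few
recent commits.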
You see, Postgres writes the data twice. First it writes the data to
the end of the WAL. WAL_buffers are used to buffer this. Then Postgres
calls fsync on the WAL when you commit the transaction. This makes the
transaction safe, and is usually fast because it will be sequential
writes on a disc. Once fsync returns, Postgres starts the (lower
priority) task of copying the data from the WAL into the data tables.
All the un-copied data in the WAL needs to be kept around (it sits on disc
in the WAL segment files), and that is what checkpoint_segments limits.
When that fills up, Postgres has to hold up writes until the copying has
freed the checkpoint segments again.
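For reference, a rough postgresql.conf sketch of the parameters mentioned
above (the values are placeholders, not tuning advice):

    wal_buffers = 1MB                    # in-memory buffering of WAL before it is written out
    checkpoint_segments = 16             # how many 16MB WAL segments may accumulate between checkpoints
    checkpoint_completion_target = 0.5   # spread checkpoint writes over this fraction of the interval
    log_checkpoints = on                 # log when and why checkpoints happen

The larger checkpoint_segments is, the more WAL can build up between
checkpoints (and the longer crash recovery can take), but committed
transactions are safe either way.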
Matthew
Well, then we have conflicting instructions in places: wiki.postgresql.org
links to this:
http://www.varlena.com/GeneralBits/Tidbits/annotated_conf_e.html