Greg Smith <gsmith@xxxxxxxxxxxxx> writes:

> On Mon, 16 Mar 2009, Gregory Stark wrote:
>
>> Why would checkpoints force out any data? It would dirty those pages
>> and then sync the files marking them clean, but they should still
>> live on in the filesystem cache.
>
> The bulk of the buffer churn in pgbench is from the statement that
> updates a row in the accounts table. That constantly generates updated
> data block and index block pages. If you can keep those changes in RAM
> for a while before forcing them to disk, you can get a lot of benefit
> from write coalescing that goes away if constant checkpoints push
> things out with a fsync behind them.
>
> Not taking advantage of that effectively reduces the size of the OS
> cache, because you end up with a lot of space holding pending writes
> that wouldn't need to happen at all yet were the checkpoints spaced
> out better.

Ok, so it's purely a question of write I/O, not reduced cache
effectiveness. I think I could see that. I would be curious to see these
results with a larger checkpoint_segments setting (a rough sketch of the
kind of settings I mean is at the end of this mail).

Looking further at the graphs, I think they're broken, but not in the way
I had guessed. It looks like they're *overstating* the point at which the
drop occurs. Looking at the numbers it's clear that under 1GB performs
well, but at 1.5GB it's already dropping to the disk-resident speed.

I think pgbench is just not that great a model for real-world usage:
a) most real-world workloads are limited by read traffic, not write
traffic, and certainly not random-update write traffic; and b) most
real-world workloads follow a less uniform distribution, so keeping busy
records and index regions in memory is more effective.

-- 
  Gregory Stark
  EnterpriseDB                          http://www.enterprisedb.com
  Ask me about EnterpriseDB's 24x7 Postgres support!
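
For concreteness, this is roughly the sort of checkpoint spacing I had in
mind above, using the 8.3-era settings; the specific values here are only
a starting point to experiment with, not a tuned recommendation:

    checkpoint_segments = 64            # default 3; allow more WAL
                                        # between checkpoints
    checkpoint_timeout = 15min          # default 5min; longer interval
                                        # between timed checkpoints
    checkpoint_completion_target = 0.9  # default 0.5; spread checkpoint
                                        # writes over more of the interval

The idea is just to let dirty accounts and index pages sit in
shared_buffers and the OS cache long enough for repeated writes to the
same blocks to be coalesced before a checkpoint fsyncs them out.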
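
And regarding (b), for reference, this is roughly the TPC-B-ish
transaction a stock pgbench run issues (quoting from memory, so the
details may be off). The point to notice is that aid is drawn uniformly
from the whole accounts table, so the write working set is effectively
the entire database rather than a small hot region:

    \set nbranches 1 * :scale
    \set ntellers 10 * :scale
    \set naccounts 100000 * :scale
    \setrandom aid 1 :naccounts
    \setrandom bid 1 :nbranches
    \setrandom tid 1 :ntellers
    \setrandom delta -5000 5000
    BEGIN;
    UPDATE accounts SET abalance = abalance + :delta WHERE aid = :aid;
    SELECT abalance FROM accounts WHERE aid = :aid;
    UPDATE tellers SET tbalance = tbalance + :delta WHERE tid = :tid;
    UPDATE branches SET bbalance = bbalance + :delta WHERE bid = :bid;
    INSERT INTO history (tid, bid, aid, delta, mtime)
        VALUES (:tid, :bid, :aid, :delta, CURRENT_TIMESTAMP);
    END;

A skewed rather than uniform choice of aid would be a much closer match
for the access patterns I was describing.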