Alvaro Herrera <alvherre@xxxxxxxxxxxxxxxxx> wrote:
> Alvaro Herrera wrote:
>> Kevin Grittner wrote:
>
>> > Anyway, given that these are replication targets, and aren't
>> > the "database of origin" for any data of their own, I guess
>> > there's no reason not to try asynchronous commit.
>>
>> Yeah; since the transactions only ever write commit records to
>> WAL, it wouldn't matter a bit that they are lost on crash. And
>> you should see an improvement, because they wouldn't have to
>> flush at all.
>
> Actually, a transaction that performed no writes doesn't get a
> commit WAL record written, so it shouldn't make any difference at
> all.

Well, concurrent with the web application is the replication. Would
asynchronous commit of that potentially alter the pattern of writes
such that it had less impact on the reads? I'm thinking, again, of
why the placement of the pg_xlog on a separate file system made such
a dramatic difference to the read-only response time -- might it make
less difference if the replication was using asynchronous commit?

By the way, the way our replication system works is that each target
keeps track of how far it has replicated in the transaction stream of
each source, so as long as a *later* transaction from a source is
never persisted before an *earlier* one, there's no risk of data
loss; it's strictly a performance issue. It will be able to catch up
from wherever it is in the transaction stream when it comes back up
after any down time (planned or otherwise).

For what it's worth, I have no complaints about the performance with
the pg_xlog directory on its own file system (although if it could be
*even faster* with a configuration change I will certainly take
advantage of that). I do like to understand the dynamics of these
things when I can, though.

-Kevin

--
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
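[Editor's note: for readers wanting to try the suggestion above, asynchronous commit in PostgreSQL can be enabled per session or per transaction rather than cluster-wide, so only the replication connections give up synchronous durability. A minimal sketch, assuming the replication target applies changes over ordinary SQL sessions:]

```sql
-- On the session the replication target uses to apply transactions:
SET synchronous_commit TO off;

-- Or, scoped to a single applied transaction:
BEGIN;
SET LOCAL synchronous_commit TO off;
-- ... apply the replicated changes here ...
COMMIT;
```

Because WAL is written sequentially, an asynchronously committed *later* transaction is never made durable ahead of an *earlier* one, which matches the ordering requirement described above; a crash can only lose a suffix of recent commits, which the target re-applies on catch-up.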