> -----Original Message-----
> From: Andres Freund [mailto:andres@xxxxxxxxxxx]
> Sent: Tuesday, August 14, 2018 12:31 PM
> To: Reza Taheri <rtaheri@xxxxxxxxxx>
> Cc: pgsql-performance@xxxxxxxxxxxxxx
> Subject: Re: Calculating how much redo log space has been used
>
> Hi,
>
> On 2018-08-14 18:51:34 +0000, Reza Taheri wrote:
> > Also, it looks like the full_page_writes parameter is the only thing
> > that can help reduce the log usage size
>
> There's also wal_compression.
>
> > Another requirement is a very short, 6-minute checkpoint time, which
> > means we will likely write the full page very often. Yes, my hands are
> > tied!
>
> Why is that a requirement / how is it specifically phrased? Is it a bounded
> recovery time?
>
> Greetings,
>
> Andres Freund

Hi Andres,

Good to know about wal_compression. It gives us a good reason to upgrade to 9.5 to get that feature.

The need for a 6-minute checkpoint comes from this requirement in the benchmark specification:

    The database contents (excluding the transaction log) stored on Durable Media
    cannot be more than 12 minutes older than any Committed state of the database.

    Comment: This may mean that Database Management Systems implementing traditional
    checkpoint algorithms may need to perform checkpoints twice as frequently
    (i.e., every 6 minutes) in order to guarantee that the 12-minute requirement is met.

But in any case, I now realize that I was going into the weeds, looking at the wrong thing. My original issue was figuring out how quickly we churn through checkpoint segment files, and I had been looking at the checkpoint stats in pgstatspack to figure that out. But that's the wrong place to look; I don't think there is anything in the pgstatspack output that can give me that information. I can tell by looking at the timestamps of the checkpoint segment files, but I was hoping to find something that gets logged in pg_log/postgresql-*log and tells me when we switch to a new log segment.

Thanks,
Reza
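
P.S. For the record, here is roughly what I plan to try once we are on 9.5. This is only a sketch I haven't actually run yet (ALTER SYSTEM needs 9.4+, and wal_compression is new in 9.5):

    -- enable compression of full-page images written to WAL
    ALTER SYSTEM SET wal_compression = on;
    SELECT pg_reload_conf();   -- the setting can be picked up with a reload
    SHOW wal_compression;      -- confirm it took effect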
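
And for measuring how quickly we generate WAL, I am thinking of sampling the insert location from psql instead of relying on file timestamps. Again just a sketch, using the 9.x function names (they were renamed in later releases):

    -- run in psql: record the current WAL insert location, wait, then diff
    SELECT pg_current_xlog_location() AS start_loc \gset
    SELECT pg_sleep(60);
    SELECT pg_xlog_location_diff(pg_current_xlog_location(), :'start_loc')
           AS wal_bytes_per_minute;

As far as I can tell there is no log message for each individual segment switch, but with log_checkpoints = on the "checkpoint complete" lines do report how many log files were added, removed, and recycled, which may be close enough for what I need.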