On 3/13/17 11:53 PM, Steven Chang wrote:

Thanks, Steven,

An interesting statement: "long-running open transaction, or a faulty archive_command script, may cause Postgres to create too many files. Ultimately, this will cause the disk they are on to run out of space, at which point Postgres will shut down."

The phrase "long-running open transaction ... may cause Postgres to create too many files" makes it sound as if there could be a case where there is not a 1-to-1 relationship between pg_xlog and the archive. I've never seen that before and didn't think it was the case. Maybe I'm reading too much into the statement.

I have to suspect that the "faulty" nature of the archive_command was that it was simply asking too much of the I/O subsystem and falling way behind. Shockingly far behind; I didn't expect it. It is a very busy system with many long-running transactions.

If I ever get back to that problem, my first pass will be to reduce the load the archive_command generates. I suppose I can do some bonnie tests to simulate the archive writing. I was doing some multiplexing of the files in the archive shell script, so let's see what writing them once locally does first. If it can't keep up even then, I'm going to need a "bigger boat."

It's a production system, so I only get a few cracks at it a year. I'll probably wait for the next patch release. Going back to the last backup is within the SLA, but I don't like not having the archive.
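
For reference, by "writing them once locally" I mean something along the lines of the stock example from the docs (this is only a sketch; the archive directory path here is a placeholder, not what is actually in my config):

    # postgresql.conf -- copy each completed WAL segment once to a local directory
    archive_mode = on
    archive_command = 'test ! -f /pg_archive/%f && cp %p /pg_archive/%f'

The copies to the other destinations that the old script multiplexed to would then happen out-of-band rather than inside archive_command, so a slow secondary target can't hold up WAL archiving.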