Stephen Tyler wrote:
> I don't understand how maxwritten_clean could be as high as 95058, and
> increment at more than 1 per second. This is a process count, not a
> buffer count? How often is the background cleaner launched? Does that
> mean I need to massively increase bgwriter_lru_maxpages, and other
> bgwriter params? They are currently default values.

maxwritten_clean is a straight count of the number of times that event
happened--the background writer stopping a cleaning scan because it hit
its bgwriter_lru_maxpages limit--not a buffer count. The background
writer runs at whatever frequency bgwriter_delay is set to, which
defaults to 200ms, for 5 executions/second. You could increase
bgwriter_lru_maxpages and the rest, but those actually make the system
less efficient if you're having trouble just keeping up with checkpoint
I/O. They're aimed more at improving latency on systems with enough I/O
to spare that you can write ahead a bit, even if it costs you an
occasional penalty. In your situation, I'd set bgwriter_lru_maxpages=0
and get rid of it altogether--it's really unlikely it's helping, and it
might be making things a tiny bit worse.

> checkpoint_segments = 128

I think that given your backing disk setup and the situation you're in,
you could easily justify 256 on your system.

It's not documented very well, that's for sure. Setting it too high
won't hurt you; it just takes up a tiny amount of RAM. I've been talking
recently with people about increasing the standard recommendation for
that to 16MB.

The only other option that might help you out here is to turn off
synchronous_commit, either for the whole system or just for transactions
where durability isn't that important. That introduces a small risk of
data loss, but not the risk of corruption that turning fsync off
altogether does. Basically, it reduces the number of fsyncs from
transaction commits to a fixed number per unit of time, rather than one
proportional to the number of commits.

-- 
Greg Smith   2ndQuadrant   Baltimore, MD
PostgreSQL Training, Services and Support
greg@xxxxxxxxxxxxxxx   www.2ndQuadrant.com
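
For reference, the tuning discussed above would look like this in
postgresql.conf (the values are just the ones mentioned in this thread,
not a general recommendation):

    bgwriter_lru_maxpages = 0     # disable LRU background writer cleaning
    checkpoint_segments = 256     # more room between checkpoints

After reloading, you can watch whether the limit is still being hit via
the pg_stat_bgwriter view:

    SELECT checkpoints_timed, checkpoints_req, maxwritten_clean
      FROM pg_stat_bgwriter;

With bgwriter_lru_maxpages=0, maxwritten_clean should stop incrementing.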
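
If you only want the durability trade-off for some work, a sketch of the
per-transaction form (the table and statement here are hypothetical):

    BEGIN;
    SET LOCAL synchronous_commit = off;
    -- bulk/low-value writes where losing the last few hundred
    -- milliseconds of commits after a crash would be acceptable
    INSERT INTO event_log VALUES (...);
    COMMIT;

SET LOCAL limits the change to that one transaction; everything else on
the system keeps fully synchronous commits.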
I don't understand how maxwritten_clean could be as high as 95058, and increment at more than 1 per second. This is a process count, not a buffer count? How often is the background cleaner launched? Does that mean I need to massively increase bgwriter_lru_maxpages, and other bgwriter params? They are currently default values.maxwritten_clean is a straight out count of times that even happened, not a buffer count. The background writer runs at whatever frequency bgwriter_delay is set to, which defaults to 200ms for 5 executions/second. You could increase bgwriter_lru_maxpages and the rest, but those actually make the system less efficient if you're having trouble just keeping up with checkpoint I/O. They're more aimed to improve latency on systems where there's enough I/O to spare that you can write ahead a bit even if costs you an occasional penalty. In your situation, I'd turn bgwriter_lru_maxpages=0 and just get rid of it altogether--it's really unlike it's helping, and it might be making things a tiny bit worse. checkpoint_segments = 128I think that given your backing disk situation and the situation you're in, you could easily justify 256 on your system. It's not documented very well, that's for sure. Setting it too high won't hurt you, just takes up a tiny amount of RAM. I've been talking recently with people about increasing the standard recommendation for that to 16MB. The only other option that might help you out here is to turn off synchronous_commit for either the whole system, or just for transactions where durability isn't that important. That introduces a small risk of data loss, but not the risk of corruption that turning fsync off altogether does. Basically, it reduces the number of fsync's from transaction commits to be a fixed number per unit of time, rather than being proportional to the number of commits. -- Greg Smith 2ndQuadrant Baltimore, MD PostgreSQL Training, Services and Support greg@xxxxxxxxxxxxxxx www.2ndQuadrant.com |