"John D. Burger" <john@xxxxxxxxx> writes: > Why doesn't the postmaster read the db files directly, presumably > using some of the same code the backends do, or is too hard to bypass > the shared memory layer? It's not "too hard", it's simply wrong. The copy on disk may be out of date due to not having been flushed from shared buffers yet. Moreover, without any locking you can't ensure you get a consistent view of the data. > Another thing you folks must have > considered would be to keep the out-of-memory copies of this kind of > data in something faster than a flat file - say Berkeley DB. Do > either of these things make sense? If I were going to do anything about this, I'd think about teaching the postmaster about some kind of incremental-update protocol instead of rereading the whole flat file every time. The issue with any such idea is that it pushes complexity, and therefore risk of bugs, into the postmaster which is exactly where we can't afford bugs. Given the lack of actual performance complaints from the field so far, I'm not inclined to do anything for now ... regards, tom lane