I'm gearing up to do some serious investigation into PostgreSQL performance
with regard to our application. I have two issues that I have questions
about, and I'll address them in two separate emails. This email regards the
tuning of work_mem.

I'm planning to go through all of the queries our application issues, under
various load scenarios, and approach each performance issue as it appears.
What I'm fuzzy on is how to know definitively when I'm overflowing work_mem.
Obviously, if work_mem is exhausted by a particular query, temp files will be
created and performance will begin to suck, but it would be nice to have more
information: how large was the resultant temp file, for example.

Does the creation of a temp file trigger any logging? I have yet to see any,
but we may not have hit any circumstances where work_mem was actually
exhausted. I've been looking through the docs at the various pg_stat* views
and functions, but it doesn't look as if there's anything in there about
this.

That leads to my other question. Assuming I've got lots of connections
(which I do), how can I determine whether work_mem is too high? Do server
processes allocate it even if they don't actually use it? Is the only way to
find out to reduce it and see when it starts to be a problem? If so, that
leads back to my first question: how can I be sure whether temp files were
created or not?

My goal is to set work_mem as small as possible for the most common queries,
then force the developers to use "set work_mem to x" to adjust it for big
queries. (I've put a few concrete sketches of what I mean below my sig.)

--
Bill Moran
Collaborative Fusion Inc.
wmoran@xxxxxxxxxxxxxxxxxxxxxxx
Phone: 412-422-3463x4023
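
P.S. To make the logging question more concrete: while writing this I did
notice a log_temp_files setting in the 8.3 docs. Assuming I'm reading them
correctly, the value is a size threshold in kB, with 0 meaning "log every
temp file":

    # postgresql.conf (8.3, if I'm reading the docs right)
    log_temp_files = 0          # log all temp files, whatever their size

and the docs suggest that produces log lines like:

    LOG:  temporary file: path "base/pgsql_tmp/pgsql_tmp12345.0", size 4169728

with the size in bytes, I believe. If that's accurate, it would answer the
"how large was the resultant temp file" part, at least on 8.3.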
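
Similarly, for an individual query, it looks like 8.3's EXPLAIN ANALYZE
reports whether a sort stayed in memory or spilled to disk. The table and
columns here are made-up stand-ins for one of our queries:

    EXPLAIN ANALYZE
    SELECT customer_id, sum(amount)
      FROM orders
     GROUP BY customer_id
     ORDER BY customer_id;

where the plan output should contain either

    Sort Method:  quicksort  Memory: 2048kB

or, when the sort overflows work_mem,

    Sort Method:  external merge  Disk: 13696kB

which is exactly the kind of per-query signal I'm after.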
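
And here's the discipline I want to impose on the developers once I've found
a safe floor. The values and the query are placeholders, obviously:

    # postgresql.conf: keep the global default small for the common case
    work_mem = 2MB

    -- application code wraps the known-big queries:
    BEGIN;
    SET LOCAL work_mem TO '256MB';  -- reverts automatically at COMMIT/ROLLBACK
    SELECT customer_id, sum(amount)
      FROM orders                   -- hypothetical big reporting query
     GROUP BY customer_id;
    COMMIT;

I'm using SET LOCAL rather than plain SET so a forgotten RESET can't leave a
pooled connection stuck with a huge work_mem.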