* Robert Schnabel (schnabelr@xxxxxxxxxxxx) wrote:
> And getting back to the to-do list entry and reading the related
> posts, it appears that even if you could set work_mem that high it
> would only use 2GB anyway.  I guess that was the second part of my
> question.  Is that true?

Yes and no.  work_mem is used by the planner to decide what kind of
plan to use.  The planner works from statistics, which aren't perfect,
especially on large tables with lots of data and dependencies between
columns.

Where the 2GB limit comes into play is when you end up with a plan
that does, say, a large sort.  PG will use memory for the sort up to
work_mem or 2GB, whichever is lower, and spill to disk after that.  I
don't believe there is such a limit for a hash table, due to how the
data structures for the hash table are allocated (I recall seeing
single PG queries that use hash tables getting into the 30+GB range;
of course, I had work_mem set upwards of 100GB on a 32GB box... :).

So, if you're doing data warehousing and you're pretty much the only
user (or there's only one at a time), setting it pretty high is
acceptable, but you do need to watch the box and make sure you don't
run it out of memory.  Also, if you're using Linux, make sure you have
things configured correctly to prevent the OOM killer from kicking in.

Also, as I suggested before, set work_mem to a reasonable level as the
default and just raise it for the specific queries that may benefit
from it.

Thanks,

Stephen
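A minimal sketch of the "reasonable default, raise per query" approach;
the values here are illustrative, not recommendations:

```sql
-- In postgresql.conf, keep a modest cluster-wide default, e.g.:
--   work_mem = '64MB'

-- Then raise it only for the query that needs it.  SET LOCAL applies
-- for the current transaction and reverts automatically at COMMIT or
-- ROLLBACK, so other queries keep the safe default.
BEGIN;
SET LOCAL work_mem = '8GB';   -- illustrative value; size it to your box
-- ... run the big sort/aggregate query here ...
COMMIT;
```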