I think you could add a ramdisk as a tablespace for temporary tables (see the sketch at the end of this message). This should work similarly to a bigger work_mem.

2011/9/12, Robert Schnabel <schnabelr@xxxxxxxxxxxx>:
>
> On 9/12/2011 12:57 PM, Shaun Thomas wrote:
>> On 09/12/2011 12:47 PM, Andy Colson wrote:
>>
>>> work_mem is not the total a query can use. I believe each step can
>>> use that much, and each backend can use it for multiple bits. So if
>>> you had two backends, each doing 2 sorts, you'd use 2*2 = 4 * 2GB =
>>> 8GB.
>>
>> Exactly. Find a big query somewhere in your system. Use EXPLAIN to
>> examine it. Chances are, that one query has one or more sorts. Each one
>> of those gets its own work_mem. Each sort. Does the query have four
>> sorts? It may use 4*work_mem. On a whim a while back, I doubled our 8MB
>> setting to 16MB on a test system. During a load test, the machine ran
>> out of memory, swapped out, and finally crashed after the OOM killer
>> went nuts.
>>
>> Set this value *at your own risk* and only after *significant* testing.
>> Having it too high can have rather unexpected consequences. Setting it
>> to 1 or 2GB, unless you have VERY few threads, or a TON of memory, is a
>> very, very bad idea.
>>
> Yep, I know. But in the context of the data warehouse where *I'm the
> only user* and I have a query that does, say, 4 large sorts like
> http://explain.depesz.com/s/BrAO and I have 32GB RAM, I'm not worried
> about using 8GB or 16GB in the case of work_mem = 4GB. I realize the
> query above only used 1.9GB for the largest sort, but I know I have
> other queries with 1 or 2 sorts that I've watched go to disk.
>
> Bob
>

--
------------
pasman
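
A minimal sketch of the ramdisk-tablespace idea, assuming a tmpfs ramdisk is
already mounted and writable by the postgres OS user; the mount point
/mnt/pg_ramdisk and the tablespace name "ramdisk" are only illustrative:

  -- Create a tablespace on the (assumed) tmpfs mount.
  CREATE TABLESPACE ramdisk LOCATION '/mnt/pg_ramdisk';

  -- Route temporary tables and temp files for this session to the ramdisk.
  SET temp_tablespaces = 'ramdisk';

  -- To make it the default for all sessions, set temp_tablespaces = 'ramdisk'
  -- in postgresql.conf and reload the server configuration:
  SELECT pg_reload_conf();

One caveat: PostgreSQL expects a tablespace directory to survive restarts, so
if the ramdisk is not recreated and remounted before the server comes back up,
queries that reference it will fail until it exists again.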