On Fri, Nov 28, 2008 at 12:51 PM, Scott Carey <scott@xxxxxxxxxxxxxxxxx> wrote:
> I'm not sure that postgres allocates the whole work_mem each time, and in
> any event, the OS won't try and allocate to RAM until the page is actually
> used. My experience is that oversized work_mem doesn't hurt until it is
> actually used. Else, my configuration with 1000 concurrent connections and
> work_mem = 1GB would have blown up. I don't have that much RAM + SWAP *
> overcommit. Of the 1000 connections, only 8 run queries that would ever
> need more than 2 or 3 MB of space to execute. Of course, one has to be
> certain what the client connections do for it to be very oversized, so I
> would not recommend the above in general.

That's kind of like saying "I have a revolver with 100 chambers and only one
bullet, so it seems perfectly safe to put it to my head and keep pulling the
trigger."

Of course pg doesn't allocate the whole amount every time; it allocates what
it needs, up to the max you allow. By setting it to 1GB, it's quite possible
that eventually enough queries will be running that each need a fair bit of
work_mem, and suddenly, bang: your server is in a swap storm and goes
non-responsive.

It's far better to set it to something reasonable, like 4 or 8 MB, and then,
for the individual queries that need more, set it higher at run time.

-- 
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
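
A minimal sketch of the per-query override described above, using a plain SET
(or SET LOCAL inside a transaction, which reverts automatically at commit or
rollback). The table and query here are made up for illustration; work_mem and
SET LOCAL are real PostgreSQL features:

```sql
-- postgresql.conf keeps a conservative cluster-wide default, e.g.:
--   work_mem = '8MB'

-- In the one session that runs a big sort/hash query,
-- raise the limit just for that transaction:
BEGIN;
SET LOCAL work_mem = '256MB';      -- reverts automatically at COMMIT/ROLLBACK

SELECT customer_id, sum(amount)    -- hypothetical query needing a large sort
FROM big_sales                     -- hypothetical table
GROUP BY customer_id
ORDER BY sum(amount) DESC;
COMMIT;
```

SET LOCAL is safer than a bare SET here because the override cannot leak into
later queries on the same connection (e.g. through a connection pool).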