On Wed, 2006-06-14 at 12:04, John Vincent wrote:
> On 6/14/06, Scott Marlowe <smarlowe@xxxxxxxxxxxxxxxxx> wrote:
> > On Wed, 2006-06-14 at 09:47, John E. Vincent wrote:
> > > -- this is the third time I've tried sending this and I never saw
> > > it get through to the list. Sorry if multiple copies show up.
> > >
> > > Hi all,
> >
> > BUNCHES SNIPPED
> >
> > > work_mem = 1048576 (I know this is high but you should see some
> > > of our sorts and aggregates)
> >
> > Ummm. That's REALLY high. You might want to consider lowering the
> > global value here, and then crank it up on a case by case basis,
> > like during nighttime report generation. Just one or two queries
> > could theoretically run your machine out of memory right now. Just
> > put a "set work_mem=1000000" in your script before the big query
> > runs.
>
> I know it is, but that's what we need for some of our queries. Our
> ETL tool (Informatica) and BI tool (Actuate) won't let us set those
> things as part of our jobs. We need it for those purposes. We have
> some really nasty queries that will be fixed on our new server.

Description of "Queries gone wild" redacted. Hehe. Yeah, I've seen
those kinds of queries before too.

You might be able to limit your exposure by using alter user:

    alter user userwhoneedslotsofworkmem set work_mem=1000000;

and then only that user will have that big of a default. You could
even make it so that only the queries that need that much memory log
in as that user, and all other queries log in as other folks.

Just a thought. I just get REAL nervous seeing a production machine
with work_mem set that high.
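
For the archives, here's a rough sketch of both approaches (the role
name "report_user" is just a placeholder, and the integer values are
in kB):

    -- Session-level: raise work_mem only around the expensive query,
    -- then put it back.
    SET work_mem = 1000000;
    -- ... run the big sort/aggregate query here ...
    RESET work_mem;

    -- Per-user: give only a dedicated role the big default.
    ALTER USER report_user SET work_mem = 1000000;

    -- A session logged in as report_user can confirm it took effect:
    SHOW work_mem;

Everybody else keeps the sane global default, so one runaway report
can't eat all the RAM on the box.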