> with 100 concurrent postgres connections, if they all did something
> requiring large amounts of work_mem, you could allocate 100 * 125MB (I
> believe that's what you said it was set to?) which is like 12GB :-O
>
> in fact a single query that's doing multiple sorts of large datasets for
> a messy join (or other similar activity) can involve several instances
> of work_mem. multiply that by 100 queries, and ouch.
>
> have you considered using a connection pool to reduce the postgres
> process count?

We do have a connection pool here, and as I mentioned, about a quarter of
these connections are Slony-controlled processes for replication. Most
connections are not doing complex queries of this sort; in fact, when this
query runs it is the only query running. Most connections are idle (kept
open by the connection pool) or performing short insert/update operations.

But I see your point about the large work_mem; it was set that high to
help speed up big queries such as this one.

-- 
m@
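
[A minimal sketch of the usual mitigation discussed above: keep the
server-wide work_mem modest and raise it only for the session or
transaction running the big query. The 16MB default below is an
illustrative value, not from the thread; 125MB is the setting mentioned
above. SET LOCAL is standard PostgreSQL.]

    -- postgresql.conf: keep the global default small so that
    -- idle/short-statement connections can't multiply a large value:
    --   work_mem = '16MB'      -- hypothetical lower default

    -- Then override per transaction, just for the expensive query:
    BEGIN;
    SET LOCAL work_mem = '125MB';  -- applies to this transaction only
    -- ... run the big sort/join query here ...
    COMMIT;                        -- work_mem reverts automatically

Since the poster notes this query runs alone, a per-transaction override
bounds the worst case to one large allocation instead of (connections x
work_mem).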