Matt Magoffin wrote:
> We have 100+ postgres processes running, so for an individual process, could the 1024 file limit be doing anything to this query? Or would I see an explicit error message regarding this condition?
With 100 concurrent postgres connections, if they all did something requiring large amounts of work_mem, you could allocate 100 * 125MB (I believe that's what you said it was set to?), which is roughly 12GB :-O
In fact, a single query that's doing multiple sorts of large datasets for a messy join (or other similar activity) can involve several instances of work_mem at once. Multiply that by 100 queries, and ouch.
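A quick back-of-envelope sketch of that worst case (the connection count and work_mem value are from this thread; the sorts-per-query figure is an illustrative assumption, not a measurement):

```python
# Back-of-envelope worst case for work_mem usage.
# work_mem is a per-sort/per-hash limit, not per-connection,
# so one messy query can hold several allocations at once.
connections = 100        # concurrent postgres backends (from this thread)
work_mem_mb = 125        # work_mem setting (from this thread)
sorts_per_query = 3      # illustrative assumption for a messy join

per_query_mb = work_mem_mb * sorts_per_query
worst_case_gb = connections * per_query_mb / 1024

print(f"per query: {per_query_mb} MB; worst case: {worst_case_gb:.1f} GB")
```

Even a modest number of simultaneous sorts per query pushes the theoretical worst case well past the simple 100 * work_mem figure.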
Have you considered using a connection pool to reduce the postgres process count?
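For example, a transaction-mode PgBouncer setup might look something like this (a minimal sketch; host, port, and pool sizes here are placeholder assumptions you'd tune for your workload):

```ini
; pgbouncer.ini -- minimal sketch, values are illustrative
[databases]
; hypothetical database name and backend address
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
pool_mode = transaction   ; release server connection after each transaction
max_client_conn = 500     ; clients can far exceed real backends
default_pool_size = 20    ; only ~20 postgres processes per database/user pair
```

With something like this in front, hundreds of application connections share a few dozen actual postgres backends, which also caps the total work_mem exposure.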
--
Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general