On 26.11.2011 19:08, Tom Lane wrote:
> Tomas Vondra <tv@xxxxxxxx> writes:
>> Anyway the output is a bit strange. It's writing a lot of temp files
>> that are significantly smaller (about 3MB) than work_mem (128MB).
>
> The most obvious guess about what's doing that is a hash join that has
> a drastic overestimate of how many rows it has to hash, so that it's
> batching the join unnecessarily. AFAIR, sorts don't spill to disk until
> they've actually reached work_mem, but hash join works differently.

OK, so it's probably not sorting. What about other nodes that use a
hash table - aggregates etc.? Would those spill to disk in case of an
overestimate too?

Gaetan did not post the whole query, but it starts like this:

    SELECT COUNT(*) FROM (SELECT DISTINCT
        "table"."id" AS "id",
        "table"."flux_id" AS "flux_id",
        "table"."locale_id" AS "locale_id",
        ...

so it obviously uses a DISTINCT clause, probably evaluated using a hash
aggregate. And this is allegedly one of the queries that write a lot.

Tomas
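
P.S. One way to check, I think, would be to run the query under EXPLAIN
ANALYZE with temp-file logging enabled. A rough sketch - the column
names are guessed from the fragment above and the subquery alias is
made up, since we don't have the full query:

    -- log every temp file created, along with its size on deletion
    SET log_temp_files = 0;
    SET work_mem = '128MB';

    EXPLAIN ANALYZE
    SELECT COUNT(*) FROM (
        SELECT DISTINCT "table"."id", "table"."flux_id", "table"."locale_id"
        FROM "table"
    ) AS sub;   -- hypothetical alias

If the DISTINCT shows up as a HashAggregate node, that answers the
first question about the plan shape. And any Hash node reporting
"Batches: N" with N > 1 would point at a join that was split into
batches up front because of a row-count overestimate - each batch then
writes its own temp file, which can easily stay far below work_mem
(consistent with the ~3MB files). The actual file sizes should then
show up in the server log thanks to log_temp_files.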