On 01/22/2014 06:56 PM, Tom Lane wrote:
> Florian Weimer <fweimer@xxxxxxxxxx> writes:
>> I've got a query which causes PostgreSQL to create hundreds of thousands
>> of temporary files, many of them empty. The process also needs a lot of
>> memory. I suspect this is due to bookkeeping for those files.
>> The query looks like this:
>> [ huge hash join ]
>> I tracked this down to a lower-than-usual work_mem setting of 1MB,
>> after the upgrade to 9.3.
> The system is trying to do the join with only 1MB of workspace, so
> yes, you end up with lots and lots of small temporary files.
>> Is this a bug?
> No.

It's still quite surprising that this temporarily needs multiple
gigabytes of RAM, much more than what's required in terms of work_mem
to make this query run quickly.
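
For what it's worth, bumping work_mem for the session and re-running
EXPLAIN ANALYZE shows how many batches the hash join splits into (the
table names below are only placeholders, not the actual query):

    -- session-local override; pick a value the machine can spare
    SET work_mem = '256MB';

    -- each Hash node reports "Buckets: ... Batches: N ...";
    -- Batches: 1 means the hash table fit in work_mem and no
    -- temporary batch files were written
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT *
    FROM big_a a
    JOIN big_b b ON a.id = b.id;

    RESET work_mem;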
Is there an easy way to check whether there is a memory leak (confined
to the current transaction) in the file descriptor switching code, or
something like that? It seems a bit unlikely that the per-file
bookkeeping overhead alone is larger than 10 KB.
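
Counting the temp files at least gives a sense of how much per-file
overhead would have to accumulate to explain the RAM usage; both knobs
below exist in 9.3 (log_temp_files can also be set in postgresql.conf):

    -- log every temporary file as it is created
    -- (threshold in kB; 0 = log all); superuser-only when set per session
    SET log_temp_files = 0;

    -- cumulative temporary file count and bytes written, per database
    SELECT datname, temp_files, temp_bytes
    FROM pg_stat_database
    WHERE datname = current_database();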
--
Florian Weimer / Red Hat Product Security Team