Re: question on hash joins


"Hartranft, Robert M. (GSFC-423.0)[RAYTHEON CO]" <robert.m.hartranft@xxxxxxxx> writes:
> Given that the hash would only contain keys and values needed for supporting
> the query I am having a hard time understanding why I am exceeding the 
> 10 GB temp_file_limit.

Because *both sides* of the join are getting dumped to temp files.
That is necessary whenever the hash table requires multiple batches.
Rows from every batch but the first, on both the inner and outer
sides, get written out to temp files; we then reload each batch of
the inner relation into the in-memory hash table and scan the
corresponding batch file from the outer relation against it.
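You can confirm this is what's happening by looking at the Hash node
in EXPLAIN (ANALYZE) output; a "Batches" figure greater than 1 means
the join spilled to temp files.  A sketch, with made-up table and
column names:

    EXPLAIN (ANALYZE, BUFFERS)
    SELECT f.*
    FROM   big_fact f
    JOIN   small_dim d ON d.id = f.dim_id;

    -- Under the Hash node, look for a line like
    --   Buckets: 65536  Batches: 16  Memory Usage: 4001kB
    -- (numbers here are only illustrative); Batches > 1 means both
    -- sides of the join are being written out in pieces.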

If you can make work_mem large enough to hold the whole inner
relation, then the problem should go away.  Note though that the
per-row overhead is significantly larger in the in-memory
representation; I don't have an estimate for that offhand.
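A minimal sketch of raising work_mem just for the one session or
transaction rather than server-wide (the 2GB figure is only an
example; size it to the inner relation plus the hashing overhead):

    -- session level: lasts until the connection ends or a RESET
    SET work_mem = '2GB';

    -- or scoped to a single transaction
    BEGIN;
    SET LOCAL work_mem = '2GB';
    -- run the problem query here
    COMMIT;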

			regards, tom lane

