Re: two table join just not fast enough.

Thanks Tom,
It looks like I pasted an older explain plan. It's almost exactly the same as the one with 50MB work_mem, except for the hash join 'buckets' part, which used more memory and ran in only one batch, so to speak.

Running with 50MB of work_mem instead of 1MB, the query went from an average of 190 ms to 169 ms, so it did help some, but it wasn't a game changer. (For this specific query, I found that 6MB of work_mem was the most that actually helped.)
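For anyone testing the same thing, a minimal sketch of trying work_mem values per session, without touching the server default (the table and column names here are hypothetical, not from my schema):

```sql
-- Applies only to the current session; the server default is untouched.
SET work_mem = '6MB';

-- ANALYZE runs the query for real; the Hash node in the output shows
-- "Batches: 1" when the inner relation fits in work_mem, and a higher
-- batch count when the hash join has to spill.
EXPLAIN (ANALYZE, BUFFERS)
SELECT t1.id, t2.status
FROM   t1
JOIN   t2 ON t2.t1_id = t1.id
WHERE  t2.status = 'active';

-- Go back to the configured default.
RESET work_mem;
```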

I have other plans for getting this thing running faster; I'll be exploring them tomorrow, along with looking at using an enum type.

- Brian F

On 11/02/2011 05:53 PM, Tom Lane wrote:
Brian Fehrle <brianf@xxxxxxxxxxxxxxxxxxx> writes:
I've got a query that I need to squeeze as much speed out of as I can.
Hmm ... are you really sure this is being run with work_mem = 50MB?
The hash join is getting "batched", which means the executor thinks it's
working under a memory constraint significantly less than the size of
the filtered inner relation, which should be no more than a couple
megabytes according to this.

I'm not sure how much that will save, since the hashjoin seems to be
reasonably speedy anyway, but there's not much other fat to trim here.

One minor suggestion is to think whether you really need string
comparisons here or could convert that to use of an enum type.
String compares ain't cheap, especially not in non-C locales.

			regards, tom lane
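For reference, a sketch of the enum conversion Tom suggests, with hypothetical table/column/value names. Enum values compare by their internal ordering rather than by locale-aware string collation, which is where the savings come from:

```sql
-- Define an enum covering the known set of status strings.
CREATE TYPE row_status AS ENUM ('active', 'inactive', 'pending');

-- Convert the existing text column in place; USING tells Postgres
-- how to cast each old value into the new type.
ALTER TABLE t2
    ALTER COLUMN status TYPE row_status
    USING status::row_status;
```

Note the cast fails if the column contains any value not listed in the enum, so the value list needs to be complete before converting.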


--
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance

