Brian Fehrle <brianf@xxxxxxxxxxxxxxxxxxx> writes:
> I've got a query that I need to squeeze as much speed out of as I can.

Hmm ... are you really sure this is being run with work_mem = 50MB?
The hash join is getting "batched", which means the executor thinks
it's working under a memory constraint significantly less than the
size of the filtered inner relation, which should be no more than a
couple of megabytes according to this.  I'm not sure how much that
will save, since the hash join seems to be reasonably speedy anyway,
but there's not much other fat to trim here.

One minor suggestion is to think about whether you really need string
comparisons here, or whether you could convert that to use of an enum
type.  String compares ain't cheap, especially not in non-C locales.

			regards, tom lane

-- 
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
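[Editor's note: a minimal sketch of checking the point above. The table and column names are hypothetical stand-ins for the original poster's query, which is not shown here.]

```sql
-- Confirm what the session actually has in effect; a setting changed in
-- postgresql.conf without a reload, or set in a different session, won't apply:
SHOW work_mem;

-- Raise it for this session only and re-run the plan:
SET work_mem = '50MB';
EXPLAIN (ANALYZE, BUFFERS)
SELECT b.id, s.label
FROM   big_table b
JOIN   small_table s ON s.id = b.small_id   -- hypothetical join
WHERE  s.status = 'active';

-- In the EXPLAIN ANALYZE output, look at the Hash node's "Batches:" figure.
-- With enough work_mem to hold the filtered inner relation, it should read
-- "Batches: 1"; a larger number means the hash table spilled to multiple
-- batches on disk, i.e. the executor saw a tighter memory limit than expected.
```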
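[Editor's note: a sketch of the enum conversion suggested above, under assumed names ("order_status", "orders", "status") that are not from the original query.]

```sql
-- Define an enum covering the column's current set of values:
CREATE TYPE order_status AS ENUM ('pending', 'shipped', 'delivered');

-- Convert the existing text column in place; this rewrites the table,
-- and fails if any row holds a value not listed in the enum:
ALTER TABLE orders
    ALTER COLUMN status TYPE order_status
    USING status::order_status;

-- Enum comparisons are cheap internal integer compares, with no
-- locale-aware string collation involved:
SELECT count(*) FROM orders WHERE status = 'shipped';
```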