Kevin Kempter <kevink@xxxxxxxxxxxxxxxxxxx> wrote:

> I have a simple query against two very large tables ( > 800 million
> rows in the url_hits_category_jt table and 9.2 million in the
> url_hits_klk1 table )

> I get a very high overall query cost:

> Hash Join  (cost=296959.90..126526916.55 rows=441764338 width=8)

Well, the cost is an abstraction: the unit, if you haven't configured
it otherwise, is the estimated cost of reading one page in a
sequential scan (seq_page_cost, 1.0 by default). This plan takes
advantage of memory to join these two large tables and return 441
million result rows at a total cost equivalent to about 126 million
sequential page reads. That doesn't sound like an unreasonable
estimate to me.

Did you think there should be a faster plan for this query, or is the
large number for the estimated cost worrying you?

-Kevin
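
P.S. If you want to see where numbers like these come from, the cost
unit and the plan estimates are easy to inspect. A minimal sketch
follows; since the query itself wasn't quoted, the join condition and
column names below are only guesses for illustration:

  -- The unit of measure: 1.0 = one sequential page fetch by default.
  SHOW seq_page_cost;
  SHOW random_page_cost;
  -- Governs whether the hash table for the join fits in memory.
  SHOW work_mem;

  -- EXPLAIN shows the planner's estimates without running the query;
  -- EXPLAIN ANALYZE runs it and reports actual times alongside.
  EXPLAIN
  SELECT k.id, c.category_id
  FROM url_hits_klk1 k
  JOIN url_hits_category_jt c ON c.url_hits_id = k.id;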