Re: merge join killing performance


On Wed, May 19, 2010 at 2:27 PM, Scott Marlowe <scott.marlowe@xxxxxxxxx> wrote:
> On Wed, May 19, 2010 at 10:53 AM, Tom Lane <tgl@xxxxxxxxxxxxx> wrote:
>> Matthew Wakeling <matthew@xxxxxxxxxxx> writes:
>>> On Tue, 18 May 2010, Scott Marlowe wrote:
>>>> Aggregate  (cost=902.41..902.42 rows=1 width=4)
>>>>     ->  Merge Join  (cost=869.97..902.40 rows=1 width=4)
>>>>         Merge Cond: (f.eid = ev.eid)
>>>>         ->  Index Scan using files_eid_idx on files f
>>>>         (cost=0.00..157830.39 rows=3769434 width=8)
>>
>>> Okay, that's weird. How is the cost of the merge join only 902 when the
>>> cost of one of its branches is 157830 and there is no LIMIT?
>>
>> It's apparently estimating (wrongly) that the merge join won't have to
>> scan very much of "files" before it can stop because it finds an eid
>> value larger than any eid in the other table.  So the issue here is an
>> inexact stats value for the max eid.
>
> I changed stats target to 1000 for that field and still get the bad plan.

And of course I ran ANALYZE on the table afterwards...
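
For reference, this is roughly what I did (the "events" table name for the ev
alias is a guess here, substitute the real relation; the per-column statistics
target only takes effect after the ANALYZE):

    -- raise the per-column statistics target on both sides of the join key
    ALTER TABLE events ALTER COLUMN eid SET STATISTICS 1000;
    ALTER TABLE files  ALTER COLUMN eid SET STATISTICS 1000;
    ANALYZE events;
    ANALYZE files;

    -- compare the planner's idea of the column's upper bound with reality;
    -- if the last histogram bound lags well behind max(eid), the early-stop
    -- estimate for the merge join will still be off
    SELECT histogram_bounds
      FROM pg_stats
     WHERE tablename = 'events' AND attname = 'eid';
    SELECT max(eid) FROM events;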


