On Tue, Dec 4, 2012 at 3:42 PM, Jeff Janes <jeff.janes@xxxxxxxxx> wrote:
(Regarding http://explain.depesz.com/s/4MWG, wrote)

> But I am curious about how the cost estimate for the primary key look
> up is arrived at:
>
> Index Scan using cons_pe_primary_key on position_effect
>     (cost=0.00..42.96 rows=1 width=16)
>
> There should be a random page for the index leaf page, and a random
> page for the heap page.  Since you set random_page_cost to 2, that
> comes up to 4.  Then there would be some almost negligible CPU costs.
> Where the heck is the extra 38 cost coming from?

I now see where the cost is coming from.  In commit 21a39de5809 (first
appearing in 9.2) the "fudge factor" cost estimate for large indexes
was increased about 10-fold, which really hits this index hard.

This was fixed in commit bf01e34b556, "Tweak genericcostestimate's
fudge factor for index size", by changing it to use the log of the
index size.  But that commit probably won't be shipped until 9.3.

I'm not sure that this change would fix your problem, because it might
also change the costs of the alternative plans in a way that
neutralizes things.  But I suspect it would fix it.  Of course, a
correct estimate of the join size would also fix it -- you have kind
of a perfect storm here.

Cheers,

Jeff

--
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
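[Editor's note: a rough sketch of the arithmetic being discussed. The exact constants (a linear pages/100000 term pre-9.2, pages/10000 after commit 21a39de5809, and a log(1 + pages/10000) term after bf01e34b556, each scaled by random_page_cost) are my reading of those commits, not a quote of the PostgreSQL source; the 195,000-page index size is back-solved from the plan's numbers, not known.]

```python
import math

# Assumed fudge-factor formulas for a large index, per the commits cited
# above (hypothetical reconstruction, not the actual PostgreSQL code):
RANDOM_PAGE_COST = 2.0  # the poster's setting

def fudge_pre_92(index_pages):
    # linear in index size, divisor 100000 (assumed pre-9.2 behavior)
    return index_pages / 100000.0 * RANDOM_PAGE_COST

def fudge_92(index_pages):
    # commit 21a39de5809: divisor shrunk ~10x, so the add-on grew ~10x
    return index_pages / 10000.0 * RANDOM_PAGE_COST

def fudge_93(index_pages):
    # commit bf01e34b556: use the log of the index size instead
    return math.log(1.0 + index_pages / 10000.0) * RANDOM_PAGE_COST

# Working backwards from the plan: total cost 42.96, minus ~4 for one
# random index page plus one random heap page, leaves ~39 of fudge.
# Under the 9.2 formula that implies roughly 39 / 2 * 10000 = 195,000
# index pages (about 1.5 GB at 8 kB/page).
index_pages = 195_000
for f in (fudge_pre_92, fudge_92, fudge_93):
    print(f"{f.__name__}: {f(index_pages):.2f}")
```

The point of the comparison: the 9.2 formula charges ~39 extra cost units for an index this size, while the log-based formula charges only ~6, which is why the commit largely removes the penalty Jeff describes.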