On Tue, Jan 5, 2010 at 4:33 PM, Brian Cox <brian.cox@xxxxxx> wrote:
> comparing this to the 1st explain foo output shows some minor differences
> in row estimates -- but nothing, I assume, that could explain the huge
> time difference. Of course, the 1st plan may not (and probably? wasn't)
> the plan that was used to take 124M ms.
>
> Any thoughts on how to avoid this?

The incorrect row estimates can only foul up the plan; they can't
directly make anything slow.

Comparing the two plans line by line, the only difference I see is that
the fast plan has:

   -> Seq Scan on ts_stats_transetgroup_user_weekly b
        (cost=0.00..23787.37 rows=89590 width=24)
        (actual time=0.040..295.414 rows=89758 loops=1)
        Filter: ((ts_interval_start_time >= '2009-12-28 00:00:00-08'::timestamp with time zone)
             AND (ts_interval_start_time < '2010-01-04 00:00:00-08'::timestamp with time zone))

...while the slow one has:

   -> Index Scan using ts_stats_transetgroup_user_weekly_starttimeindex
        on ts_stats_transetgroup_user_weekly b
        (cost=0.00..1301.21 rows=898 width=24)
        Index Cond: ((ts_interval_start_time >= $3) AND (ts_interval_start_time < $4))

So it looks like using that index to fetch the data is a LOT slower
than just scanning the whole table.

In terms of fixing this problem, I have two ideas (rough sketches below):

- If you don't have any queries where this index makes things faster,
  then you can just drop the index.

- If you have other queries where this index helps (even though it is
  hurting this one), then you're going to have to find a way to execute
  the query without bound parameters, i.e. with the actual values in
  there instead of $1 through $4. That will allow the planner to see
  that the index scan is a loser, because it will see that there are a
  lot of rows in the specified range of ts_interval_start_times.

...Robert
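
P.S. To make that concrete, here's a rough sketch. I'm guessing at the
query shape since only the plans were posted: count(*) is just a
stand-in for the real target list, and the timestamps are the ones from
the seq scan's filter.

    -- Before dropping the index, check whether anything else actually
    -- uses it (idx_scan counts index scans since the last stats reset):
    SELECT indexrelname, idx_scan
      FROM pg_stat_user_indexes
     WHERE indexrelname = 'ts_stats_transetgroup_user_weekly_starttimeindex';

    -- Idea 1: if nothing else needs it, just drop it.
    DROP INDEX ts_stats_transetgroup_user_weekly_starttimeindex;

    -- Idea 2: issue the query with the literal values inlined instead
    -- of $1 through $4, so the planner can see how many rows the range
    -- really covers and choose the seq scan on its own:
    EXPLAIN ANALYZE
    SELECT count(*)
      FROM ts_stats_transetgroup_user_weekly b
     WHERE b.ts_interval_start_time >= '2009-12-28 00:00:00-08'
       AND b.ts_interval_start_time <  '2010-01-04 00:00:00-08';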