On Tue, Oct 27, 2009 at 11:08 AM, <jesper@xxxxxxxx> wrote:
> In my example the seq-scan evaluates 50K tuples and the heap-scan 40K.
> The question is why the "per-tuple" evaluation becomes that much more
> expensive (x7.5)[1] on the seq-scan than on the index-scan, when the
> complete dataset is indeed in memory?

[ ... thinks a little more ... ]

The bitmap index scan returns a TID bitmap.  From a quick look at
nodeBitmapHeapScan.c, it appears that the recheck cond only gets
evaluated for those portions of the TID bitmap that are lossy.  So I'm
guessing what may be happening here is that although the bitmap heap
scan is returning 40K rows, it's doing very few (possibly no) qual
evaluations, and mostly just checking tuple visibility.

>> If your whole database fits in RAM, you could try changing your
>> seq_page_cost and random_page_cost variables from the default values
>> of 1 and 4 to something around 0.05, or maybe even 0.01, and see
>> whether that helps.
>
> This is about planning the query.  We're talking actual runtimes here.

Sorry, I assumed you were trying to get the planner to pick the faster
plan.  If not, never mind.

...Robert
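
P.S. If you want to see the lossy-versus-exact distinction for yourself,
one way (a rough sketch; the table and column names below are just
placeholders for your own query) is to shrink work_mem until the TID
bitmap no longer fits and goes lossy, which forces the recheck cond to
be evaluated for every tuple on the lossy pages, and then compare
against a run with a roomy work_mem:

    SET work_mem = '64kB';    -- small enough to force a lossy bitmap
    EXPLAIN ANALYZE SELECT * FROM t WHERE x BETWEEN 100 AND 200;
    SET work_mem = '64MB';    -- roomy enough to keep the bitmap exact
    EXPLAIN ANALYZE SELECT * FROM t WHERE x BETWEEN 100 AND 200;
    RESET work_mem;

The runtime gap between the two bitmap heap scans should be roughly the
cost of the extra qual evaluations on the lossy pages.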
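
P.P.S. And if you do end up experimenting with the cost settings, they
can be changed per-session, so there's no need to edit postgresql.conf
just to test a plan (0.05 here is only the ballpark figure from my
suggestion above, not a recommendation):

    SET seq_page_cost = 0.05;
    SET random_page_cost = 0.05;
    EXPLAIN ANALYZE SELECT * FROM t WHERE x BETWEEN 100 AND 200;  -- your query
    RESET seq_page_cost;
    RESET random_page_cost;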