> This is not a problem with dead rows; the index is not really
> satisfying your query, so the database has to look through an
> indeterminate number of rows until the 'limit 15' is satisfied. Yeah,
> backwards scans are slower, especially for disk-bound scans, but you
> also have to consider how many filter misses you have. The smoking
> gun is here:
>
> "Index Scan Backward using changes_shareschange on changes
> (cost=0.00..925150.26 rows=181997 width=98) (actual time=3.161..15.843
> rows=15 loops=1)
>   Filter: ((activity = ANY ('{4,5}'::integer[])) AND (mfiled >= $1))"
>
> When you see "Filter: xyz", xyz is what each record has to be compared
> against after the index has pointed you to an area (or areas) in the
> heap. Whether you scan forwards or backwards, it is pure luck how many
> records you have to look through before you get 15 good ones, i.e.
> ones that satisfy the filter. To prove that one way or the other, you
> can convert your WHERE clause into a boolean-returning expression (and
> bump the limit appropriately) to see how many records get filtered
> out.
>
> merlin

I also have indexes on activity and mfiled (both btree) - wouldn't the
database use them?

- Kevin
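
A minimal sketch of merlin's diagnostic, assuming the original query
orders by shareschange descending (which is what the backward scan of
changes_shareschange suggests); the date literal stands in for the $1
parameter and the bumped limit of 1000 is arbitrary:

    -- Move the WHERE conditions into the select list as a boolean and
    -- bump the limit; the per-group counts show how many of the top
    -- rows the filter would have discarded on the way to 15 matches.
    SELECT (activity = ANY ('{4,5}'::integer[])
            AND mfiled >= '2009-01-01') AS satisfies_filter,
           count(*) AS nrows
    FROM (SELECT activity, mfiled
          FROM changes
          ORDER BY shareschange DESC
          LIMIT 1000) s
    GROUP BY 1;

If most of the top rows come back with satisfies_filter = false, that
would confirm the filter misses, rather than the backward scan itself,
are what the query is paying for.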