(first five iterations)

 Index Only Scan using test_pkey on test  (cost=0.29..476.29 rows=9999 width=4) (actual time=0.058..2.439 rows=10000 loops=1)
   Index Cond: (col1 = 'xyz'::text)
   Filter: (col2 ~~ '%'::text)
   Heap Fetches: 0
 Execution time: 2.957 ms

(iterations 6+)

 Sort  (cost=205.41..205.54 rows=50 width=4) (actual time=104.986..105.784 rows=10000 loops=1)
   Sort Key: col2
   Sort Method: quicksort  Memory: 853kB
   ->  Seq Scan on test  (cost=0.00..204.00 rows=50 width=4) (actual time=0.014..2.100 rows=10000 loops=1)
         Filter: ((col2 ~~ $2) AND (col1 = $1))
 Execution time: 106.282 ms
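A rough reconstruction of the test case, inferred from the plan output above (the schema, data, and statement name "q" are guesses, not taken from the thread):

    -- Hypothetical reconstruction: table and plan shape match the output
    -- above, but the column types, data, and statement name are assumed.
    CREATE TABLE test (col1 text, col2 text, PRIMARY KEY (col1, col2));
    INSERT INTO test SELECT 'xyz', i::text FROM generate_series(1, 10000) AS i;
    VACUUM ANALYZE test;

    PREPARE q(text, text) AS
        SELECT col2 FROM test WHERE col1 = $1 AND col2 LIKE $2 ORDER BY col2;

    -- Executions 1-5 are planned with the actual parameter values (custom
    -- plans); starting with the sixth, the cached generic plan is used if
    -- the planner estimates it to be cheaper on average.
    EXPLAIN ANALYZE EXECUTE q('xyz', '%');   -- repeat six or more times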
Can you convince the planner to use the IOS for the generic plan too, by setting "enable_seqscan = false" and maybe "enable_sort = false"?
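Something like the following should exercise that (the GUCs are real planner settings; the statement name and query come from the hypothetical reconstruction above; I believe an already-built generic plan is not rebuilt just because a planner GUC changed, so re-preparing is the safe way to test):

    SET enable_seqscan = off;
    SET enable_sort = off;
    DEALLOCATE q;                       -- discard the cached generic plan
    PREPARE q(text, text) AS
        SELECT col2 FROM test WHERE col1 = $1 AND col2 LIKE $2 ORDER BY col2;

(On PostgreSQL 12 and later, SET plan_cache_mode = force_custom_plan avoids the generic plan entirely without disabling whole plan types.)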
I get what Tom is saying generally, but I'm surprised the planner would throw away an IOS plan in favor of a sequential scan plus sort when it thinks fewer rows will actually match. I've generally read that the closer you get to retrieving the whole table, the more advantageous a sequential scan becomes, yet this exhibits the opposite behavior.
IOW, I'm wondering why, of the various generic plans, this one is considered the cheapest in the first place. Is "cost per row" an invalid metric to consider - since in the plans above the specific plan works out to 0.048 compared to 0.244 for the generic one?
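As a sanity check on those figures, assuming "cost per row" means total estimated cost divided by estimated rows, the specific-plan number follows from the output above:

    specific plan: 476.29 / 9999 rows  = ~0.048 per row
    generic plan:  205.54 total cost  vs. 476.29 total cost

My understanding is that the plan cache compares total estimated cost (roughly, the generic plan against the average of the custom plans), not cost per row - so 205.54 beats 476.29 and the generic plan wins, even though its 50-row estimate is wildly off, which is exactly what makes it so much slower in practice.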
Or, should we at least give the plan cache a memory of actual executions for a given set of bindings? It would probably be acceptable for a pattern of execution times like "5 4 5 6 4 20 4 5 6" to show up: we run the specific plan five times, try the generic one and see that it did worse, and so go back to the specific plan - assuming the actual supplied inputs don't change, as is the case in the example.
David J.