Adrian Klaver <adrian.klaver@xxxxxxxxxxx> writes:
> On 6/4/20 9:43 AM, Tom Lane wrote:
>> It's possible that the index had bloated to the point where the planner
>> thought it was cheaper to use a seqscan.  Did you make a note of the
>> cost estimates for the different plans?

> I missed the part where the OP pointed to a SO question.  In that
> question were links to explain.depesz.com output.

Ah, I didn't bother to chase that link either.  So the cost estimates
are only a fraction of a percent apart, making it unsurprising for
not-so-large changes in the index size to cause a flip in the
apparently-cheapest plan.  The real question then is why the cost
estimates aren't actually modeling the real execution times very well;
and I'd venture that that question boils down to why this rowcount
estimate is so far off:

> ->  Parallel Seq Scan on oscar mike_three
>       (cost=0.000..1934568.500 rows=2385585 width=3141)
>       (actual time=159.800..158018.961 rows=23586 loops=3)
>     Filter: (four AND (NOT bravo) AND (zulu <= 'echo'::timestamp
>             without time zone))
>     Rows Removed by Filter: 8610174

We're not going to be able to answer that if the OP doesn't wish to
decloak his data a bit more ... but a reasonable guess is that those
filter conditions are correlated.  With late-model Postgres you might
be able to improve matters by creating extended statistics for this
table.
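As a minimal sketch of that suggestion, using the anonymized
identifiers from the plan above (table "oscar", columns "four",
"bravo", "zulu"; substitute the real names), something like:

    CREATE STATISTICS oscar_filter_stats (dependencies, mcv)
        ON four, bravo, zulu FROM oscar;
    ANALYZE oscar;  -- extended statistics are only built by ANALYZE

The "mcv" statistics kind requires v12 or later ("dependencies" exists
from v10), and how much it helps depends on which clause shapes your
server version can match against the multivariate MCV list.  After the
ANALYZE, re-run EXPLAIN ANALYZE and see whether the seqscan's rows
estimate moves from ~2.4M toward the actual ~24K per worker.

			regards, tom lane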