Yes, I can create a simpler version that exhibits the problem: postgres_bug_simpler.sql <http://postgresql.1045698.n5.nabble.com/file/n5806320/postgres_bug_simpler.sql>

This version involves only one smaller table of 60K rows and a linked table with 20M rows. I tried with 6K and 1M rows but could not get the problem to occur. Both tables are now unchanging in size.

The smaller table gets updated frequently and then starts exhibiting the bad query plan, it seems especially after the second autovacuum and autoanalyze. When dead_rows goes to zero in the stats, live_tup can stay a huge factor larger than the table really is for some time (see the P.S. below for how I watch this). On my system the smaller, frequently updated table grows only slightly, if at all.

I never want a table scan of the big table to happen, but even with enable_seq_scan=false set in the functions that query these tables I can still get the bad query plan.

Would it be possible to have a per-table setting that gave an expression for determining the table size? I.e., for key, highly updated tables I could set and maintain the metadata for the table's size, and even the shape of the data, myself; then for these tables autovacuum would not need to make the effort of estimating the size.

regards
Tim
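
P.S. For anyone who would rather not download the attachment, this is only the rough shape of the setup; the table names are placeholders and the attached postgres_bug_simpler.sql is the authoritative version:

    -- hypothetical names; see postgres_bug_simpler.sql for the real repro
    CREATE TABLE small_table (
        id      integer PRIMARY KEY,
        payload text
    );

    CREATE TABLE big_table (
        id       bigint PRIMARY KEY,
        small_id integer REFERENCES small_table (id),
        payload  text
    );
    CREATE INDEX big_table_small_id_idx ON big_table (small_id);

    -- 60K rows in the small table, 20M in the linked table
    INSERT INTO small_table
        SELECT g, md5(g::text) FROM generate_series(1, 60000) g;
    INSERT INTO big_table
        SELECT g, (g % 60000) + 1, md5(g::text)
        FROM generate_series(1, 20000000) g;

    -- the small table is then rewritten heavily and repeatedly,
    -- which is what drives the autovacuum/autoanalyze cycles
    UPDATE small_table SET payload = md5(random()::text);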
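
Once the updates have been running, I watch the estimates drift with queries along these lines (same placeholder names as above):

    -- the stats collector's view of the table: n_live_tup stays
    -- inflated for a while after n_dead_tup drops back to zero
    SELECT relname, n_live_tup, n_dead_tup,
           last_autovacuum, last_autoanalyze
    FROM pg_stat_user_tables
    WHERE relname = 'small_table';

    -- the planner's row estimate vs. the real row count
    SELECT reltuples::bigint AS planner_estimate,
           (SELECT count(*) FROM small_table) AS actual_rows
    FROM pg_class
    WHERE relname = 'small_table';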
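
And by enable_seq_scan=false in the functions I mean the per-function SET clause, roughly like this (again a sketch with placeholder names); even with it in place the planner can pick the bad plan when the estimates are far enough off:

    -- sketch only: the SET clause applies for the duration of each call;
    -- note enable_seq_scan = off only makes seq scans very expensive to
    -- the planner, it does not forbid them outright
    CREATE OR REPLACE FUNCTION lookup_linked(p_id integer)
    RETURNS SETOF big_table
    LANGUAGE sql
    STABLE
    SET enable_seq_scan = off
    AS $$
        SELECT * FROM big_table WHERE small_id = p_id;
    $$;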