Duncan Whitham <dwhitham@xxxxxxxxx> writes:
> We now only need 1 environment as we can replicate the performance problem
> on a copy of live – snapshot/restore from AWS of live. We now have a vacuum
> analyse running every night on the 3 tables in question on live – to
> eliminate bloat and inaccurate stats as the root of the problem.

Hmm, doesn't seem like that's getting the job done.  I can see at least
one serious misestimate in these plans:

>   ->  Bitmap Heap Scan on alf_node_aspects aspect_1  (cost=3420.59..418372.63 rows=163099 width=8) (actual time=1.402..5.243 rows=4909 loops=1)
>         Recheck Cond: (qname_id = 251)

It doesn't seem to me that such a simple condition ought to be misestimated
by a factor of 30, so either you need to crank up the stats target for this
column or you need to analyze the table more often.

The other rowcount estimates don't seem so awful, but this one is
contributing to the planner thinking that "SubPlan 1" is going to be very
expensive, which probably accounts for it trying to avoid what's actually
a cheap plan.

			regards, tom lane
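
[Editor's note: a minimal sketch of the two options suggested above, assuming
alf_node_aspects.qname_id (from the plan excerpt) is the misestimated column;
the 1000 target and 0.01 scale factor are illustrative values only, not
recommendations for this particular workload:]

    -- Raise the per-column statistics target above the default of 100, so
    -- ANALYZE samples more rows and builds larger MCV/histogram statistics.
    ALTER TABLE alf_node_aspects ALTER COLUMN qname_id SET STATISTICS 1000;

    -- Re-analyze so the planner sees the new statistics immediately; the
    -- larger target only takes effect at the next ANALYZE of the table.
    ANALYZE alf_node_aspects;

    -- Alternatively (or additionally), make autovacuum analyze this table
    -- more often than the global default by lowering its per-table
    -- analyze scale factor.
    ALTER TABLE alf_node_aspects SET (autovacuum_analyze_scale_factor = 0.01);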