Mike Broers <mbroers@xxxxxxxxx> writes:
> If I'm reading this correctly postgres thinks the partition will return 6.5
> million matching rows but actually comes back with 162k.  Is this a case
> where something is wrong with the analyze job?

You've got a lot of scans there that're using conditions like

> │ ->  Seq Scan on event__99999999 e_1  (cost=0.00..2527828.05 rows=11383021 width=778) (actual time=25522.389..747238.885 rows=42 loops=1)
> │       Filter: (((body ->> 'SID'::text) IS NOT NULL) AND (validation_status_code = 'P'::bpchar))
> │       Rows Removed by Filter: 12172186

While I'd expect the planner to be pretty solid on estimating the
validation_status_code condition, it's not going to have any idea about
that JSON field test.  That's apparently very selective, but you're just
getting a default estimate, which is not going to think that a NOT NULL
test will exclude lots of rows.

One thing you could consider doing about this is creating an index on
(body ->> 'SID'::text), which would prompt ANALYZE to gather statistics
about that expression.  Even if the index weren't actually used in the
plan, this might improve the estimates and the resulting planning choices
enough to make it worth maintaining such an index.

Or you could think about pulling that field out and storing it on its own.
JSON columns are great for storing random unstructured data, but they are
less great when you want to do relational-ish things on subfields.

			regards, tom lane
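
[A minimal sketch of the expression-index suggestion, using the partition
name from the quoted plan; for a declaratively partitioned table the index
could be created on the parent instead:]

    -- Expression index: ANALYZE will then collect statistics on
    -- (body ->> 'SID') for this table, even if the planner never
    -- actually chooses the index in a plan.
    CREATE INDEX event__99999999_sid_idx
        ON event__99999999 ((body ->> 'SID'));
    ANALYZE event__99999999;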
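
[And a sketch of pulling the field into its own column; the column name
"sid" is only illustrative, and the stored generated column syntax assumes
PostgreSQL 12 or later -- on older versions an ordinary column maintained
by a trigger or by the application would serve the same purpose:]

    -- Materialize the JSON field as a regular column so it gets
    -- ordinary per-column statistics and can be filtered or indexed
    -- like any other column.
    ALTER TABLE event__99999999
        ADD COLUMN sid text GENERATED ALWAYS AS (body ->> 'SID') STORED;
    ANALYZE event__99999999;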