MirrorX <mirrorx@xxxxxxxxx> writes:
> I've lately seen more and more occurrences of queries having huge planning
> times (>500ms) while their execution time is very low (<10ms). At that
> point, if a VACUUM is executed on the table involved (without ANALYZE),
> the planning time goes back to normal. Could someone elaborate on the
> connection between planning time and what VACUUM does? I am asking because
> I would have thought that an ANALYZE would have resolved the issue (with
> increased sampling, for example), but it didn't; only VACUUM resolves it.
> (The version is 9.4.x.)

I'm leaping to a conclusion from insufficient data here, but what this
sounds like is a known issue where the planner spends too much time trying
to identify the current minimum or maximum value of a column by searching
an index for that column.  In principle that should be quick, but if there
are a whole lot of recently-dead entries at the relevant end of the index,
it's not quick.  VACUUM fixes it by removing said entries.

We've gone through a couple of rounds of refinement to improve that
situation, but the most recent one is only in v11, not prior branches:

https://git.postgresql.org/gitweb/?p=postgresql.git&a=commitdiff&h=3ca930fc3

			regards, tom lane
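
A rough sketch of how one might reproduce the effect, assuming a hypothetical
table t with an indexed integer column (the names, row counts, and probe
values below are purely illustrative):

    -- Build a table with a million rows and an index on id.
    CREATE TABLE t (id integer PRIMARY KEY, payload text);
    INSERT INTO t SELECT g, 'x' FROM generate_series(1, 1000000) g;
    ANALYZE t;

    -- Delete a big run of rows at the top of the index.  Until VACUUM
    -- runs, their index entries remain as recently-dead entries.
    DELETE FROM t WHERE id > 500000;

    -- The constant falls in the last histogram bucket, so the planner
    -- probes the index for the column's actual maximum and has to step
    -- over all the dead entries first.  Expect a large "Planning time"
    -- alongside a small "Execution time".
    EXPLAIN ANALYZE SELECT * FROM t WHERE id > 990000;

    -- Removing the dead index entries brings planning time back down.
    VACUUM t;
    EXPLAIN ANALYZE SELECT * FROM t WHERE id > 990000;

The point of the sketch is that the DELETE leaves dead entries at the top
end of the index, and an ANALYZE alone would not remove them, which matches
the reported behavior that only VACUUM helps.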