On Tue, Sep 17, 2019 at 4:41 AM Bob Jolliffe <bobjolliffe@xxxxxxxxx> wrote:
Hi
We are trying to diagnose why postgres might be making poor query-plan
choices. One theory is that it does not assume it has the memory
suggested by effective_cache_size.
We do know that max_connections is set quite high (600) when we don't
really expect more than 100. I wonder whether the planner takes
max_connections x work_mem into account when considering the memory it
potentially has available?
No, it doesn't try to guess how many connections might be sharing effective_cache_size. It assumes the entire amount is available to any one query at any given time.
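For what it's worth, a rough way to convince yourself of that is to change the setting per session and watch the plans: effective_cache_size is only an input to the cost model, so SET allocates nothing and affects nothing but planning. A minimal sketch (some_table and some_col are just placeholder names, not anything from your schema):

    SHOW max_connections;          -- 600 in your case
    SHOW work_mem;
    SHOW effective_cache_size;     -- the planner uses this full value, not value / max_connections

    -- session-local change; no memory is reserved or freed
    SET effective_cache_size = '256MB';
    EXPLAIN SELECT * FROM some_table WHERE some_col = 42;
    RESET effective_cache_size;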
But it is only used for cases where a single query is going to access the same blocks over and over again: the planner estimates that a block will still be in cache on subsequent visits within that query. That estimate does not extend to blocks visited repeatedly by different queries, whether on the same connection or on different ones; there is no notion that some objects might be hotter than others, other than within one query.
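If you want to see where that actually bites, the classic case is a nested-loop join whose inner side is an index scan: the same upper index pages get fetched over and over within one query, and a larger effective_cache_size makes those repeat fetches look cheaper, which can tip the planner toward the nested loop and away from a hash or merge join. A sketch you could adapt (orders, order_items and the join columns are hypothetical; whether the plan actually changes depends on your data and other settings):

    SET effective_cache_size = '128MB';
    EXPLAIN SELECT *
      FROM orders o
      JOIN order_items i ON i.order_id = o.id
     WHERE o.created_at > now() - interval '7 days';

    SET effective_cache_size = '48GB';
    EXPLAIN SELECT *
      FROM orders o
      JOIN order_items i ON i.order_id = o.id
     WHERE o.created_at > now() - interval '7 days';

    RESET effective_cache_size;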
Cheers,
Jeff