Joel Benelli <joel.benelli@xxxxxxxxx> writes:
> We have a Linux server with 16GB RAM and an SSD, running Postgres 9.5
> with this configuration:
> ...
> effective_cache_size = 12GB
> With these settings, memory usage sits between 10 and 13GB, and when
> running a dump we get "out of memory" errors that prevent new requests.
> After changing effective_cache_size to 6GB, memory usage stays between
> 8 and 12GB, with no errors during the dump.

That seems *very* odd.  The only thing effective_cache_size actually
affects is the planner's opinion of the cost of index scans, so the
planner might choose different query plans after you change the setting.
But there's no obvious reason why that would lead to memory problems in
one case and not the other.

I'd suggest looking in the postmaster log to discover exactly which query
is failing, and then manually EXPLAIN'ing that query with both settings
to see what plans we are talking about.  That might yield some more
insight.
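For instance, from a psql session you could compare the plans with
something along these lines (the query below is only a placeholder;
substitute whatever statement the log shows failing):

    -- Placeholder query; use the actual failing statement from the log.
    SET effective_cache_size = '12GB';
    EXPLAIN SELECT * FROM some_table WHERE some_column = 42;

    -- Same query under the smaller setting, to see if the plan changes.
    SET effective_cache_size = '6GB';
    EXPLAIN SELECT * FROM some_table WHERE some_column = 42;

    RESET effective_cache_size;

			regards, tom lane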