Thanks for sharing this thread. My suggestion of a work_mem_stack_size is the same concept mentioned in this thread as a work_mem_pool; I prefer the latter term to the one I was using. When the work_mem
pool is exhausted, PostgreSQL just uses temp files for work_mem. With the current statistics for temp files, plus new statistics on work_mem pool usage, a user could tune memory much more precisely, leaving the “art of memory tuning” behind. The other
added benefit is that people would gain a better understanding of how work_mem is used, since explaining the feature naturally requires explaining what a work_mem_pool is and when it is drawn on. There are probably a lot of PostgreSQL instances that would run faster just by having the
confidence to increase work_mem. I am sure many instances have this value set too low.
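The pool-of-tokens idea (pop a token to allocate work_mem in RAM; if none are left, spill to temp files) can be sketched roughly as below. This is only an illustration of the proposal, not anything PostgreSQL implements; the WorkMemPool name, the token count, and the spill counter are all hypothetical:

```python
import threading

class WorkMemPool:
    """Hypothetical sketch of the proposed work_mem pool: a fixed number of
    work_mem-sized tokens shared by all backends. An exhausted pool means
    "spill to temp files", never "wait" or "fail with out-of-memory"."""

    def __init__(self, tokens):
        self._sem = threading.Semaphore(tokens)
        self.spill_count = 0  # stat a DBA could watch to size the pool

    def acquire(self):
        # Non-blocking pop of a token off the stack.
        if self._sem.acquire(blocking=False):
            return True   # caller may allocate work_mem in RAM
        self.spill_count += 1
        return False      # pool empty: caller should use temp files

    def release(self):
        # Put the token back when the sort/hash is done.
        self._sem.release()
```

The spill_count stat is the point of the proposal: instead of guessing at work_mem, a DBA could watch how often the pool runs dry and size it from real load.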
Lance
From:
Bruce Momjian <bruce@xxxxxxxxxx>
Date: Friday, April 2, 2021 at 10:07 AM
To: Laurenz Albe <laurenz.albe@xxxxxxxxxxx>
Cc: Campbell, Lance <lance@xxxxxxxxxxxx>, MichaelDBA <MichaelDBA@xxxxxxxxxxx>, SASIKUMAR Devaraj <sashikumard@xxxxxxxxx>, holger@xxxxxxxxxx <holger@xxxxxxxxxx>, pgsql-admin@xxxxxxxxxxxxxxxxxxxx <pgsql-admin@xxxxxxxxxxxxxxxxxxxx>
Subject: Re: work_mem
On Fri, Apr 2, 2021 at 04:59:16PM +0200, Laurenz Albe wrote:
> On Fri, 2021-04-02 at 13:31 +0000, Campbell, Lance wrote:
> > It feels like there needs to be work_mem and work_mem_stack_size. When work memory is
> > needed a process “pops” a token off of a stack. When it is done processing it “puts”
> > the token back on the stack. If the stack is empty, then don’t allocate memory;
> > just write to disk, as work_mem overflow does today.
> >
> > This does two key things:
> >
> > 1) It allows for a real world understanding of how much memory is really needed on a
> > day to day basis. You can track how often a stack is empty. You can also look at the
> > number of temp files to see when work exceeds the work_mem allocation. There is no
> > “art” to setting these values. You can use logical analysis to make choices.
> >
> > 2) This also prevents out of memory issues. You are protecting yourself from extreme loads.
>
> If I get you right, you want another memory limit per session.
>
> I see the point, but then we wouldn't need "work_mem" any more, right?
> What is the point of limiting the memory per plan node if we have an
> overall limit?
>
> In practice, I have never had trouble with "work_mem". I usually follow
> my rule of thumb: max_connections * work_mem + shared_buffers < RAM
>
> While some backends may need more, many will need less. Only bitmaps, hashes
> and sorts are memory hungry.
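The quoted rule of thumb is simple enough to check with arithmetic; the numbers below are purely hypothetical, chosen only to show the calculation:

```python
# Laurenz's rule of thumb: max_connections * work_mem + shared_buffers < RAM
# All values in MB; these are illustrative settings, not recommendations.
max_connections = 100
work_mem_mb = 64          # worst case: every backend uses one full work_mem
shared_buffers_mb = 8192  # 8 GB
ram_mb = 32768            # 32 GB

worst_case_mb = max_connections * work_mem_mb + shared_buffers_mb
print(worst_case_mb, worst_case_mb < ram_mb)  # 14592 True
```

Note the rule is deliberately conservative: a single query can use work_mem once per sort or hash node, so the true worst case can exceed it, while in practice most backends use far less.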
This blog entry discusses how work_mem might be improved:
https://momjian.us/main/blogs/pgblog/2018.html#December_10_2018
--
Bruce Momjian <bruce@xxxxxxxxxx>
https://momjian.us
EDB
https://enterprisedb.com
If only the physical world exists, free will is an illusion.