
Re: work_mem RAM Accounting in PostgreSQL

Hi @Laurenz Albe & PG Community, 
I highly appreciate your response, but I have some additional questions (inline).

On Thu, Nov 14, 2024 at 4:40 PM Laurenz Albe <laurenz.albe@xxxxxxxxxxx> wrote:
On Wed, 2024-11-13 at 21:09 +0200, Alexandru Lazarev wrote:
> I have some questions regarding how the "work_mem" parameter affects the overall RAM
> usage of PostgreSQL processes within a physical host or container.
>
> Each backend process during SQL execution may allocate N * "work_mem" simultaneously.
> For example, if "work_mem" is set to 32MB and N=5 (i.e. 5 simultaneous and/or sequential
> ORDER and hash operations), and the initial RAM usage (RSS - Resident Set Size) of the
> backend is 10MB, I would expect the backend process to use 160MB (32MB * 5) + 10MB,
> resulting in a total RAM usage of 170MB.

The limit for a hash is hash_mem_multiplier * work_mem.

Yes, I know; I accounted for that above in "N".
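(For illustration, assuming the PostgreSQL 15+ default hash_mem_multiplier = 2.0: a single hash node may grow to 2.0 * 32MB = 64MB, so a plan with 2 hash nodes and 3 sorts could peak at 2 * 64MB + 3 * 32MB = 224MB rather than 5 * 32MB = 160MB.)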
 

> My questions are as follows:
>
> 1. What happens to the allocated "work_mem" after the execution of query nodes? Are
>    these memory allocations freed?

Yes.

> 2. If they are freed, do they remain in the RSS of the PostgreSQL backend?

They may, because the C library can choose not to actually free all the memory,
but retain some to serve future malloc() requests more efficiently.

This part is important for me to understand (my apologies, I am not a C programmer, and it is difficult for me to read the PG sources :)). I wanted to understand whether there is some kind of pooling of allocated memory inside Postgres itself. So, once the memory is freed, it is up to the C library and the underlying OS whether it is returned to the OS or stays reserved in the process - did I get that correctly? If so, then most probably this "reserved" memory should be reclaimed under memory-pressure conditions (e.g. when other backend processes greedily try to allocate big chunks of memory)?
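
To illustrate what I mean, here is a minimal standalone C sketch (not PostgreSQL code; the block count, sizes and the malloc_trim() call are purely illustrative) of the C-library behaviour you describe - freed heap memory that glibc may keep cached and that therefore still shows up in RSS:

/* Minimal sketch (not PostgreSQL code): shows how glibc may keep freed
 * heap memory cached in the process, so it still counts towards RSS.
 * Block count, sizes and the malloc_trim() call are only illustrative. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <malloc.h>              /* malloc_trim(), glibc-specific */

/* Read the current resident set size (kB) from /proc/self/status. */
static long rss_kb(void)
{
    FILE *f = fopen("/proc/self/status", "r");
    char line[256];
    long kb = -1;

    while (f && fgets(line, sizeof(line), f))
        if (sscanf(line, "VmRSS: %ld kB", &kb) == 1)
            break;
    if (f)
        fclose(f);
    return kb;
}

int main(void)
{
    enum { NBLOCKS = 4096, BLOCKSZ = 8192 };     /* ~32MB of small blocks */
    static char *blocks[NBLOCKS];

    printf("RSS at start:            %ld kB\n", rss_kb());

    /* Simulate a work_mem-sized burst of small allocations (like a sort). */
    for (int i = 0; i < NBLOCKS; i++) {
        if ((blocks[i] = malloc(BLOCKSZ)) == NULL)
            return 1;                            /* out of memory, bail out */
        memset(blocks[i], 0, BLOCKSZ);           /* touch pages so they count in RSS */
    }
    printf("RSS after allocating:    %ld kB\n", rss_kb());

    /* Free everything except the last block.  The surviving block pins the
     * top of the heap, and glibc typically keeps the freed chunks below it
     * cached for future malloc() calls, so they still show up in RSS. */
    for (int i = 0; i < NBLOCKS - 1; i++)
        free(blocks[i]);
    printf("RSS after free():        %ld kB\n", rss_kb());

    /* Explicitly ask glibc to hand unused heap pages back to the kernel. */
    malloc_trim(0);
    printf("RSS after malloc_trim(): %ld kB\n", rss_kb());

    free(blocks[NBLOCKS - 1]);
    return 0;
}

As far as I understand, PostgreSQL's memory contexts get their blocks from malloc() and return them with free(), so whether the OS sees that memory again after a sort or hash node finishes is exactly this C-library decision.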
 

> 3. From various sources, I understand that these allocations are freed after
>    each node execution due to memory contexts, but they might remain in some sort
>    of backend memory pool for future reuse. Is this correct?

I am not sure what you mean, but perhaps what I wrote above.

Yes, that is what you wrote above - I was repeating myself.
 

> 4. If so, will this memory be accounted for as used RAM on my Linux/Container
>    system after the backend returns to an idle state (e.g., connection pooling)?

Certainly.

> Additionally: If the above is true, and my PostgreSQL host or container is limited
> to 16GB of RAM, what would happen if I have 100 pooled connections, each gradually
> allocating those 160MB? Will this memory be reclaimed (if I understood it correctly
> as a kind of inactive anon mem), or will the OOM Killer be triggered at some point
> (because it is real allocated memory)?

The backends won't keep that much memory allocated, so you need not worry.

Let me reformulate the use case a bit differently - I would highly appreciate any input from the community:
1) Let's say 60 connections performed some memory-intensive operations, each one allocating up to 200MB of work_mem, and then finished and returned to the 'idle' state.
2) After that, the remaining 40 connections start doing "work_mem"-consuming operations in parallel, each one allocating up to 300MB. Will the N*"work_mem" RAM allocated in step #1 be reclaimed at the point of step #2, when multiple backends need to aggressively allocate K*"work_mem" memory?
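
A back-of-envelope calculation with these numbers: step #1 could leave up to 60 * 200MB = 12GB sitting in the backends' RSS, and step #2 needs up to 40 * 300MB = 12GB of fresh allocations, so on a 16GB host roughly 8GB of the memory retained from step #1 would have to be returned (or reclaimed under pressure) for step #2 to fit without triggering the OOM killer.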

Thank You
 

100 connections are a lot.  With efficient pooling, you could have fewer connections
and use your resources more efficiently.

Yours,
Laurenz Albe
