RE: thousands of CachedPlan entry per backend

Yes, there is too much cached metadata, and we are considering a workaround to disconnect the sessions in a timely manner.
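
If a server-side knob helps with that, one option to sketch (PostgreSQL 14 added idle_session_timeout; note it only closes sessions that are sitting idle, so a connection-lifetime limit in the pooler or application is the more general fix):

    -- Close client sessions that stay idle for more than an hour
    -- (PostgreSQL 14+, needs superuser; pick a value that fits your workload).
    ALTER SYSTEM SET idle_session_timeout = '1h';
    SELECT pg_reload_conf();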

In addition, based on the dumped memory contexts, I have two questions:

   1) We found thousands of cached plans. Since the JDBC driver allows at most 256 cached prepared statements, how can the backend cache so many SQL plans? If the application calls a function, does the backend also cache a plan for every SQL statement inside that function? And do table triggers show similar caching behavior?
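
For what it's worth, PL/pgSQL keeps a cached plan for each SQL statement of every function a session has executed (trigger functions included), independently of the JDBC driver's 256-statement limit. A rough way to watch this from the affected backend, using PostgreSQL 14's pg_backend_memory_contexts view (it reports only the backend you run it in):

    -- Count and size the plan-cache contexts held by this backend.
    -- The names below are the ones the plan cache uses; adjust the
    -- pattern if your dump shows different context names.
    SELECT name, count(*) AS contexts, sum(used_bytes) AS used
      FROM pg_backend_memory_contexts
     WHERE name LIKE 'CachedPlan%'
     GROUP BY name
     ORDER BY used DESC;

Calling one of the PL/pgSQL functions a few times from the same session and re-running this query should show the counts growing and then staying flat, since the plans live for the life of the session.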

   2) From this line we see a total of 42 blocks and 215 free chunks:

      CacheMemoryContext: 8737352 total in 42 blocks; 1021944 free (215 chunks); 7715408 used

      But the blocks and chunks of its child contexts add up to much more than what is reported for "CacheMemoryContext" itself. Is that expected?
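
As far as I know, each line in a memory context dump reports only that context's own allocations; child contexts are printed on their own lines and are not folded into the parent's numbers, so the children of CacheMemoryContext adding up to far more than its own 8 MB is expected. A small sketch to check the same relationship from SQL (again assuming the PostgreSQL 14 pg_backend_memory_contexts view):

    -- Compare CacheMemoryContext's own size with the combined size of
    -- its direct children; the children are expected to be much larger.
    SELECT 'parent' AS what, total_bytes, used_bytes
      FROM pg_backend_memory_contexts
     WHERE name = 'CacheMemoryContext'
    UNION ALL
    SELECT 'children', sum(total_bytes), sum(used_bytes)
      FROM pg_backend_memory_contexts
     WHERE parent = 'CacheMemoryContext';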

Thanks,

James

From: Pavel Stehule <pavel.stehule@xxxxxxxxx>
Sent: Thursday, June 1, 2023 3:19 PM
To: Laurenz Albe <laurenz.albe@xxxxxxxxxxx>
Cc: James Pang (chaolpan) <chaolpan@xxxxxxxxx>; pgsql-performance@xxxxxxxxxxxxxxxxxxxx
Subject: Re: thousands of CachedPlan entry per backend

 

Hi

 

On Thu, Jun 1, 2023 at 8:53 AM Laurenz Albe <laurenz.albe@xxxxxxxxxxx> wrote:

On Thu, 2023-06-01 at 03:36 +0000, James Pang (chaolpan) wrote:
> PG V14.8-1, client using the PostgreSQL JDBC driver. We found 40MB of process memory per
> backend; the operating system figure and the memory context dump "Grand total:" both matched.
> But looking at the details, almost all entries belong to "CacheMemoryContext",
> from this line: CacheMemoryContext: 8737352 total in 42 blocks; 1021944 free (215 chunks); 7715408 used,
> but there are thousands of lines for its children, and the sum of their blocks is much more than "8737352 total in 42 blocks".
>
> Our application uses the PostgreSQL JDBC driver with default parameters (max 256 prepared statements);
> there are many triggers and functions in this database, and a few functions run SQL via the extension
> pg_background. We have thousands of connections, and we are very concerned about why there are thousands
> of cached SQL entries; that will consume huge amounts of memory. Is there any way to limit the cached plan
> entries to reduce memory consumption? Or is it abnormal behavior or a bug to see so many cached plan lines?

If you have thousands of connections, that's your problem.  You need effective connection pooling.
Then 40MB per backend won't be a problem at all.  Having thousands of connections will cause
other, worse, problems for you.

See for example
https://www.cybertec-postgresql.com/en/tuning-max_connections-in-postgresql/

If you want to use functions, but don't want to benefit from plan caching, you can set
the configuration parameter "plan_cache_mode" to "force_custom_plan".

 

The problem of too much cached metadata can also be caused by sessions that stay open too long.

 

In this case it is good to close and reopen the session (reconnect) after an hour, or maybe less.

 

Regards

 

Pavel

 


Yours,
Laurenz Albe

