Thanks a lot!
I understand this now.
And the following:
> In general you only want to have as many actual connections to PG
> as you have CPU cores in your database server.
This made me consider the appropriate value for max_connections.
That might be a separate topic, I think.
I am wondering how to decide the appropriate number of PG processes serving clients.
I think that if some activities use the disk heavily, the answer might be different.
To make it simple, consider the following scenario:
there are some batch programs that communicate with PG every day.
Maybe I can collect the CPU time of each program and compare it with the program's total runtime over some period.
If the CPU time is close to the total runtime, I can say the CPUs or CPU cores are fully used.
If the CPU time is much smaller than the total runtime, I can say the CPUs or CPU cores are not busy.
So I could increase the value of max_connections in order to make full use of the CPUs.
But on the other hand, I might need to replace the disks with faster ones.
Are there any common calculation methods for deciding the max_connections value?
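The measurement idea above can be sketched with Python's standard time module. This is only a minimal illustration that measures the current process rather than a PG backend; for real batch programs you would sample their CPU time with OS tools instead:

```python
import time

def cpu_fraction(func):
    """Run func and return the fraction of wall-clock time spent on CPU.

    A value near 1.0 means the work is CPU-bound; a much smaller value
    means the process spent most of its time waiting (e.g. on disk I/O).
    """
    wall_start = time.perf_counter()
    cpu_start = time.process_time()
    func()
    wall = time.perf_counter() - wall_start
    cpu = time.process_time() - cpu_start
    return cpu / wall if wall > 0 else 0.0

# CPU-bound work: the fraction should be high.
busy = cpu_fraction(lambda: sum(i * i for i in range(2_000_000)))

# Pure waiting (standing in for I/O stalls): the fraction should be low.
idle = cpu_fraction(lambda: time.sleep(0.2))
```

By the reasoning in the mail, a high fraction suggests the CPUs are already the bottleneck, while a low fraction suggests the programs are waiting on something else (often disk), in which case raising max_connections alone would not help.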
Thanks
* 高健 (luckyjackgao@xxxxxxxxx) wrote:
> So I can draw a conclusion:
>
> A prepared statement is only for use in the same session in which it
> was created.

Correct; prepared statements are session-local.
> It can not be shared via multiple sessions.
Yes.

> That is, in some special situations,
>
> if I have to use multiple connections between the client application
> and the postgresql database,
>
> I must consider this point if I want to get benefit from prepared
> statements.

If you're using a connection pooling system of some kind, it can
be useful to have it automatically set up all of your prepared
statements when it first connects to a new backend.  If it opens new
backend connections preemptively and ensures it's always got "spares"
available, this can be done with minimal impact to the application.  Or,
of course, you can simply have your application check if a given
statement has been prepared yet and, if not, prepare it before executing
it.  That adds an extra round-trip to the database, of course, but you
could also cache and keep local the set of statements that you know
you've prepared for a given database connection.
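The check-before-execute approach could be sketched like this. It is a hypothetical sketch: FakeConn stands in for a real driver connection, and the statement name and SQL are invented for illustration.

```python
class FakeConn:
    """Stand-in for a real database connection: it just records the SQL
    sent.  With a real driver these strings would go over the wire to
    the one backend session that owns the prepared statements."""
    def __init__(self):
        self.sent = []

    def execute(self, sql):
        self.sent.append(sql)


class PreparedCache:
    """Client-side record of statement names PREPAREd on one connection.

    Prepared statements are session-local, so this cache must be kept
    per connection: a statement prepared on one backend does not exist
    on another."""
    def __init__(self, conn):
        self.conn = conn
        self.prepared = set()

    def execute(self, name, sql, args):
        if name not in self.prepared:
            # One extra round-trip, paid only once per connection.
            self.conn.execute("PREPARE {} AS {}".format(name, sql))
            self.prepared.add(name)
        self.conn.execute("EXECUTE {} ({})".format(
            name, ", ".join(str(a) for a in args)))


conn = FakeConn()
cache = PreparedCache(conn)
cache.execute("get_user", "SELECT * FROM users WHERE id = $1", [1])
cache.execute("get_user", "SELECT * FROM users WHERE id = $1", [2])
# The PREPARE is sent only on the first call; both calls send an EXECUTE.
```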
In general, having a good connection pooler of some kind is really
critical if you're going to have a lot of application threads talking to
PG. In general you only want to have as many actual connections to PG
as you have CPU cores in your database server.
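With PgBouncer in front of, say, an 8-core database server, that sizing might look like the fragment below. The numbers are purely illustrative, and the right pool size also depends on disks and workload:

```ini
[pgbouncer]
pool_mode = transaction
max_client_conn = 500    ; many application connections allowed in...
default_pool_size = 8    ; ...but only ~one real PG backend per CPU core
```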
> So I am now thinking about the reason that a prepared statement can not
> cross over sessions.
>
> Maybe it is because of MVCC control? So in order to make it simple, the
> prepared statement is in one session range?

It doesn't have anything to do with MVCC.  AFAIK, the reason it's
implemented this way is that it was much simpler to implement: it
doesn't require any shared memory access or coordination between
backends, it's not hard to work around, and it isn't a terribly often
requested feature.
There are also a lot of backend parameters which can change what a single
'prepare' ends up doing: search_path, constraint_exclusion, and other
planner tunables, all of which need to be the same across all of the
sessions for the same plan to be the 'correct' one in all the backends,
not to mention the roles and permissions of the users involved.
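search_path is a concrete example of this: object names are resolved when the statement is prepared, so the "same" PREPARE text can mean different tables in different sessions (the schema names here are invented for illustration):

```sql
SET search_path = tenant_a;
PREPARE get_items AS SELECT * FROM items;  -- resolves to tenant_a.items
SET search_path = tenant_b;
EXECUTE get_items;                         -- still reads tenant_a.items
```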
Simply put, it'd be quite a bit of work, would probably make things
slower due to the cross-backend communication required, and would really
only work for these specific "my application uses all the same prepared
statements and always connects as the same user and with all the same
parameters all the time" cases.
Thanks,
Stephen