24.11.10 02:11, Craig Ringer wrote:
> On 11/22/2010 11:38 PM, Ivan Voras wrote:
>> On 11/22/10 16:26, Kevin Grittner wrote:
>>> Ivan Voras <ivoras@xxxxxxxxxxx> wrote:
>>>> On 11/22/10 02:47, Kevin Grittner wrote:
>>>>> Ivan Voras wrote:
>>>>>> After 16 clients (which is still good since there are only 12
>>>>>> "real" cores in the system), the performance drops sharply
>>>>> Yet another data point to confirm the importance of connection
>>>>> pooling. :-)
>>>> I agree, connection pooling will get rid of the symptom. But not
>>>> the underlying problem. I'm not saying that having 1000s of
>>>> connections to the database is a particularly good design, only
>>>> that there shouldn't be a sharp decline in performance when it
>>>> does happen. Ideally, the performance should remain the same as it
>>>> was at its peak.
>>> Well, I suggested that we add an admission control [1] mechanism,
>> It looks like a hack (and one which is already implemented by connection
>> pool software); the underlying problem should be addressed.
> My (poor) understanding is that addressing the underlying problem
> would require a massive restructure of PostgreSQL to separate
> "connection and session state" from "executor and backend". Idle
> connections wouldn't require a backend to sit around unused but
> participating in all-backends synchronization and signalling. Active
> connections over a configured maximum concurrency limit would queue
> for access to a backend rather than fighting it out for resources at
> the OS level.
>
> The trouble is that this would be an *enormous* rewrite of the
> codebase, and would still only solve part of the problem. See the
> prior discussion on in-server connection pooling and admission control.
Hello.
IMHO the main problem is not a backend sitting and doing nothing, but
multiple backends all trying to do their work at once. So, as for me,
the simplest option that would make most people happy is a limit (a
waitable semaphore) on the number of backends actively executing
queries. Such a limit could even be derived automatically from the
number of CPUs (simple) and spindles (not sure if that is simple, but
some default could be used). An idle backend, or one waiting on a lock,
consumes few resources; anyone who wants to reduce resource usage for
such backends further can introduce external pooling. A simple limit
like this would make me happy (e.g. having max_active_connections=1000,
max_active_queries=20); a rough sketch of the idea follows below.
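
Here is a minimal sketch of what I mean, in plain C with POSIX
semaphores. All names here (limiter_init, query_begin, query_end,
MAX_ACTIVE_QUERIES) are made up for illustration and are not PostgreSQL
internals; a real implementation would also need a process-shared
semaphore in shared memory rather than a process-local one:

/* Minimal sketch of a waitable-semaphore cap on concurrently
 * executing queries. Hypothetical names, not PostgreSQL source.
 * Build with: cc -pthread sketch.c */
#include <semaphore.h>
#include <stdio.h>

#define MAX_ACTIVE_QUERIES 20   /* the proposed max_active_queries knob */

static sem_t active_query_slots;

static void limiter_init(void)
{
    /* Second argument 0 = not shared between processes, which keeps
     * this sketch self-contained; real backends would share it. */
    sem_init(&active_query_slots, 0, MAX_ACTIVE_QUERIES);
}

static void query_begin(void)
{
    /* Block until one of the slots is free. A blocked backend just
     * sleeps in the kernel and consumes almost no CPU. */
    sem_wait(&active_query_slots);
}

static void query_end(void)
{
    sem_post(&active_query_slots);  /* hand the slot to the next query */
}

int main(void)
{
    limiter_init();
    query_begin();
    printf("query running (at most %d at once)\n", MAX_ACTIVE_QUERIES);
    query_end();
    return 0;
}

The point is that the wait happens once, at query start, before the
backend begins consuming CPU, I/O and work_mem in earnest.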
The main question here is how many resources a backend that is waiting
for a lock can consume. Is locking done at query start, or can a
backend go into a wait only after it has already consumed much of its
work_mem? In the second case the limit would not cap work_mem usage,
but it would still prevent much of the contention.
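
For contrast, the bigger restructure Craig describes above -- sessions
queueing for a fixed pool of executors instead of each connection
owning a backend -- might look roughly like the sketch below. Again,
every name is made up for illustration, and this uses threads in one
process rather than PostgreSQL's process-per-backend model:

/* Sessions enqueue queries; a fixed pool of executor threads drains
 * the queue, so at most NUM_EXECUTORS queries are ever active no
 * matter how many sessions are connected. Build with: cc -pthread */
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

#define NUM_EXECUTORS 4         /* configured maximum concurrency */
#define QUEUE_CAP     64

typedef struct { int session_id; } Query;

static Query queue[QUEUE_CAP];
static int head, tail, count;
static pthread_mutex_t mtx = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t not_empty = PTHREAD_COND_INITIALIZER;
static pthread_cond_t not_full  = PTHREAD_COND_INITIALIZER;

/* Called by a session: blocks when the queue is full instead of
 * fighting other sessions for CPU at the OS level. */
static void submit_query(Query q)
{
    pthread_mutex_lock(&mtx);
    while (count == QUEUE_CAP)
        pthread_cond_wait(&not_full, &mtx);
    queue[tail] = q;
    tail = (tail + 1) % QUEUE_CAP;
    count++;
    pthread_cond_signal(&not_empty);
    pthread_mutex_unlock(&mtx);
}

/* Executor loop: only NUM_EXECUTORS of these exist. */
static void *executor_main(void *arg)
{
    for (;;) {
        pthread_mutex_lock(&mtx);
        while (count == 0)
            pthread_cond_wait(&not_empty, &mtx);
        Query q = queue[head];
        head = (head + 1) % QUEUE_CAP;
        count--;
        pthread_cond_signal(&not_full);
        pthread_mutex_unlock(&mtx);
        printf("executor %ld serving session %d\n",
               (long)(intptr_t)arg, q.session_id);
    }
    return NULL;
}

int main(void)
{
    pthread_t execs[NUM_EXECUTORS];
    for (intptr_t i = 0; i < NUM_EXECUTORS; i++)
        pthread_create(&execs[i], NULL, executor_main, (void *)i);
    for (int s = 0; s < 10; s++)
        submit_query((Query){ .session_id = s });
    pthread_exit(NULL);         /* sketch only: no clean shutdown */
}

The semaphore version gets most of the same benefit with a tiny change;
the queueing version is what the "enormous rewrite" would buy.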
Best regards, Vitalii Tymchyshyn