Connection pooling for a mixture of lightweight and heavyweight jobs?

I have a question that may be related to connection pooling.

We run a bunch of high-performance, lightweight Postgres clients that serve up images (via mod_perl and Apache::DBI).  We have roughly ten web sites, with ten mod_perl instances each, so we always have around 100 Postgres backends sitting around waiting.  When a lightweight request comes in, it's a single query on a primary key with no joins, so it's very fast.

We also have a very heavyweight process (our primary search technology) that can take many seconds, even minutes, to do a search and generate a web page.

The lightweight backends are mostly idle, but when a heavyweight search finishes, it causes a burst on the lightweight backends, which must be very fast. (They provide all of the images in the results page.)

This mixture seems to make it hard to configure Postgres's memory settings correctly.  The heavyweight search query needs some elbow room to do its work, but the lightweight queries all get the same resource settings.
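(For concreteness: one way around the shared-settings problem is Postgres's per-role settings, assuming the two workloads connect as separate roles.  The role names and sizes below are made up for illustration, not a recommendation.)

```sql
-- Hedged sketch: give only the heavyweight search role extra working memory.
-- "search_app" and "image_app" are hypothetical role names.
ALTER ROLE search_app SET work_mem = '256MB';  -- elbow room for big searches
ALTER ROLE image_app  SET work_mem = '4MB';    -- primary-key lookups need little
```

Each backend then picks up the setting for the role it authenticated as, so the lightweight backends don't inherit the heavyweight query's memory allowance.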

I figured that having these lightweight Postgres backends sitting around was harmless -- they allocate shared memory and other resources, but they never use them, so what's the harm?  But recent discussions about connection pooling seem to suggest otherwise: that merely having 100 backends sitting around might be a problem.
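(For reference, the kind of external pooling those discussions describe might look like the pgbouncer.ini sketch below -- database names and pool sizes here are illustrative assumptions, not tuned values.)

```ini
; Hedged sketch of a PgBouncer config for this workload.
[databases]
images = host=127.0.0.1 dbname=images

[pgbouncer]
pool_mode = transaction    ; a backend is borrowed per transaction, not per client
default_pool_size = 10     ; ~10 real backends instead of ~100 idle ones
max_client_conn = 200      ; all the mod_perl children can still connect
```

The idea is that the 100 mod_perl connections land on the pooler, which multiplexes them over a small number of real backends, so idle clients no longer hold a Postgres process each.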

Craig

--
Sent via pgsql-admin mailing list (pgsql-admin@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin

