
Re: PostgreSQL 8.4.8 bringing my website down every evening

On 25.5.2011 21:54, Alexander Farber wrote:
> Thank you for your replies,
> 
> I've reverted httpd.conf to
> 
>   StartServers       8
>   MinSpareServers    5
>   MaxSpareServers   20
>   ServerLimit      256
>   MaxClients       256
> 
> and have changed postgresql.conf to:
> 
>   shared_buffers = 512MB
>   # for Apache + my game daemon + cron jobs
>   max_connections = 260
> 
> Do you think I need to reconfigure CentOS 5.6
> for the bigger shared memory too or
> will it adapt by itself?

If the database started up fine, then you probably don't need to mess
with the kernel parameters. There are two limits here:

SHMMAX - max size of a single memory segment
SHMALL - max size of all the memory segments (sum)

So even if you've increased the shared buffers and the database starts
fine, there's a slight chance that something else will fail later,
because it needs its own segment and the sum would exceed SHMALL.

But if everything works, you're probably fine. When something fails,
you'll know where to look.

You can see the list of segments using "ipcs -m".
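To make that concrete, something along these lines works on Linux (the /proc paths and the 4kB page size are the usual defaults, but verify on your box):

```shell
# Inspect the kernel's shared memory limits (Linux /proc paths)
cat /proc/sys/kernel/shmmax   # SHMMAX: max bytes in a single segment
cat /proc/sys/kernel/shmall   # SHMALL: max total shared memory, in pages
getconf PAGE_SIZE             # page size, typically 4096 bytes

# List the segments currently allocated; PostgreSQL's segment shows up here
ipcs -m

# Rough arithmetic: pages needed for shared_buffers = 512MB with 4kB pages
pages=$(( 512 * 1024 * 1024 / 4096 ))
echo "$pages pages"
```

Note that SHMALL is counted in pages on most Linux kernels, so compare the computed page count (plus PostgreSQL's own overhead) against it, not against a byte value.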

> I'm still studying the docs.
> 
> Also I've installed the pgbouncer package and
> will read on it too, but I already wonder what is
> its behaviour if configured for 100 connections
> and a 101st comes in?

Say you have "max_client_conn = 2" and "pool_size = 1", and then three
clients arrive.

client 1: connects to pgbouncer, gets a db connection from the pool,
          starts a transaction and works

client 2: connects to pgbouncer, asks for a db connection but has to
          wait until client 1 finishes (because there's only 1
          connection in the pool)

client 3: can't connect to pgbouncer at all, gets "ERROR:  no more
          connections allowed" (because max_client_conn = 2)

So in your case the 101st client is rejected. But pgbouncer connections
are much cheaper than real db connections, so you can allow more of
them. With ServerLimit=256 you may set max_client_conn to 256 to handle
the peak.
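For reference, the matching pgbouncer.ini fragment might look like this (the database name, ports, and pool size are placeholders, not a recommendation):

```ini
; illustrative pgbouncer.ini sketch -- names and values are examples
[databases]
; route clients of "mygame" to the local PostgreSQL server
mygame = host=127.0.0.1 port=5432 dbname=mygame

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
; one client slot per Apache worker (ServerLimit = 256)
max_client_conn = 256
; but only a small pool of real database connections
default_pool_size = 20
```

Apache would then connect to port 6432 instead of 5432, and the 256 cheap pgbouncer slots share the 20 real backends.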

> ; total number of clients that can connect
> max_client_conn = 100
> default_pool_size = 20

BTW as Steve Crawford already pointed out, it's impossible to give
reliable advice without more information. So it may happen that this
won't fix the actual cause.

You need to find out whether the number of connections really is the
problem. Maybe some poorly performing SQL is causing all this, and
fixing that one query might solve everything.

Or maybe there's some concurrency issue (e.g. all the sessions updating
the same row). In that case the number of connections is a symptom
rather than the cause.

Are those connections idle? What do vmstat / iostat show? Have you
enabled logging of slow queries?
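If slow-query logging isn't on yet, a postgresql.conf fragment like this will show which statements are holding connections (the threshold is just an example):

```ini
# postgresql.conf -- example settings, tune the threshold to taste
log_min_duration_statement = 500   # log every statement running longer than 500 ms
log_line_prefix = '%t [%p] '       # timestamp and PID, to correlate with connections
```

Whether the connections are idle can be checked in pg_stat_activity (in 8.4, idle sessions show current_query = '&lt;IDLE&gt;').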

Tomas

-- 
Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

