
Re: Out of memory

On Fri, Mar 28, 2008 at 12:38 PM, Alex Adriaanse
<alex@xxxxxxxxxxxxxxxxxxx> wrote:
> I have a client that experienced several Out Of Memory errors a few
>  weeks ago (March 10 & 11), and I'd like to figure out the cause.  In the
>  logs it's showing that they were getting out of memory errors for about
>  0.5-1 hour, after which one of the processes would crash and take the
>  whole database down.  After they restarted the server it would
>  eventually start giving out of memory messages and crash again.  This
>  happened a total of five times over a 24-hour period.  After that we did
>  not see these errors again.  They did upgrade to 8.1.11 on the 14th, and
>  have also moved some of the databases to different servers afterwards.
>
>  First some background information:
>
>  Software (at the time of the memory errors): CentOS 4.5 (x86_64) running
>  its 2.6.9-55.ELsmp Linux kernel, PostgreSQL 8.1.9 (from RPMs provided on
>  the PostgreSQL web site: postgresql-8.1.9-1PGDG.x86_64).
>
>  Hardware: 4 dual-core Opterons.  16GB physical RAM, 2GB swap.
>
>  Database: they use persistent connections, and usually have around 1000
>  open database connections.  The vast majority of those are usually
>  idle.  They do run a lot of queries though.  The total size of the
>  databases in this cluster is 36GB, with the largest database being 21GB,
>  and the largest table being 2.5GB (having 20 million tuples).
>
>  Highlights of postgresql.conf settings:
>  max_connections = 2000
>  shared_buffers = 120000
>  work_mem = 4096

SNIP

Just because you can set max_connections to 2000 doesn't mean it's a
good idea.  If your client needs 1000 persistent connections, then put
a connection pooler between your app (I'm guessing PHP, since its
persistent connections behave this way) and the database.
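
For what it's worth, a pooler like PgBouncer in transaction-pooling
mode is a common fit here.  A minimal sketch of a pgbouncer.ini (the
database name, auth file path, and pool sizes below are placeholders,
not tuned values):

    [databases]
    ; "appdb" is a placeholder; point this at the real database
    appdb = host=127.0.0.1 port=5432 dbname=appdb

    [pgbouncer]
    listen_addr = 127.0.0.1
    listen_port = 6432
    auth_type = md5
    auth_file = /etc/pgbouncer/userlist.txt
    ; transaction pooling lets ~1000 client connections share
    ; a few dozen real backends
    pool_mode = transaction
    max_client_conn = 1000
    default_pool_size = 50

The app then connects to port 6432 instead of 5432.  One caveat:
transaction pooling breaks session-level state (prepared statements,
temp tables), so session pooling is the safer default if the app
relies on those.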

Running 1000 connections is a LOT, and if you need 1000 active
connections, then you're likely gonna need a bigger machine than one
with 8 cores and 16 gigs of RAM.  OTOH, if you are actively servicing
less than 10% of those connections at a time, then you're wasting
memory on backends that are started up and doing nothing.  Each one
consumes some amount of memory on its own, usually in the 5 to 10 MB
range, just to sit there and do nothing.
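
Back-of-the-envelope: 1000 idle backends at 5 to 10 MB each is
roughly 5 to 10 GB of your 16 GB gone before a single query runs, and
that's before work_mem (4 MB per sort at your setting) gets involved.
You can get a feel for the real idle/active split with something like
this (against 8.1's pg_stat_activity, and assuming
stats_command_string is on so current_query is populated):

    -- idle backends in 8.1 report current_query = '<IDLE>'
    SELECT count(*) AS total,
           sum(CASE WHEN current_query = '<IDLE>' THEN 1 ELSE 0 END) AS idle
      FROM pg_stat_activity;

If idle is consistently most of the total throughout the day, that
memory is buying you nothing.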

Plus you've got thundering-herd issues that can show up as you
increase the connection count.

Pooling is the answer here.

