Fwd: Postgre Eating Up Too Much RAM

I replied to this a few days ago but forgot to include the group.  It appears that increasing our server swap space has fixed our problems.  I will keep my fingers crossed.

 
> (there are currently a little over 200 active connections to the
> database):

How many cores do you have on the system? What sort of storage
system?

Intel Dual Xeon E5606 2133MHz
2 CPUs with 4 cores each
32GB RAM
Hard Drive: 1.6 TB RAID10
 
What, exactly, are the symptoms of the problem? Are there
200 active connections when the problem occurs? By "active", do you
mean that there is a user connected or that they are actually running
something?

When the server goes unresponsive I am not sure what the number of connections is.  I will do more diagnostic reporting, but I suspect the number of connections may be spiking for some reason and / or that the usage of the BLOBs in the DB is at the heart of the problem.
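In the meantime, a quick check that might help the diagnostics: on PostgreSQL 9.2 or later, pg_stat_activity has a "state" column, so something like this shows how many connections are actually running queries versus sitting idle:

    SELECT state, count(*)
    FROM pg_stat_activity
    GROUP BY state
    ORDER BY count(*) DESC;

(On older versions there is no state column; idle connections show up with current_query = '<IDLE>' instead.)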
 

http://wiki.postgresql.org/wiki/Guide_to_reporting_problems

> max_connections = 1000

If you want to handle a large number of clients concurrently, this is
probably the wrong way to go about it. You will probably get better
performance with a connection pool.

http://wiki.postgresql.org/wiki/Number_Of_Database_Connections

We already use connection pooling.  We are in the process of putting limits on the max open connections and also changing how those connections are used to reduce the number of open connections from any particular application instance.
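For what it's worth, if the pooler in question is pgbouncer, the caps being added look roughly like this (values illustrative, not recommendations):

    [pgbouncer]
    pool_mode = transaction
    default_pool_size = 20     ; server connections per user/database pair
    max_client_conn = 1000     ; clients the pooler itself will accept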
 
> shared_buffers = 256MB


Depending on your workload, a Linux machine with 32GB RAM should
probably have this set somewhere between 1GB and 8GB.

I will try increasing the shared_buffers.  Thanks.
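For anyone following along, the postgresql.conf change would be along these lines (numbers illustrative; shared_buffers requires a server restart to take effect):

    shared_buffers = 4GB            # was 256MB
    effective_cache_size = 24GB     # planner hint only, not an allocation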

A few days ago I increased the swap on the machine to 34 GB (it was 2 GB and I added 32 more).  The server now appears to be stable.  Either this change has been enough to keep things humming along well or whatever the app is doing to cause issues just hasn't occurred in the last few days.  I suspect this change is what has stabilized things.
 
> vacuum_cost_delay = 20ms

Making VACUUM less aggressive usually backfires and causes
unacceptable performance, although that might not happen for days or
weeks after you make the configuration change.

Our databases are mostly heavy reads with not a lot of writes.  We almost never do hard deletes.  That is why I set vacuum_cost_delay at this level.
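For reference, the stock settings this departs from are (as of the versions we run; check your own postgresql.conf defaults):

    vacuum_cost_delay = 0                 # manual VACUUM runs unthrottled by default
    autovacuum_vacuum_cost_delay = 20ms   # autovacuum is throttled separately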
 

By the way, the software is called PostgreSQL. It is often shortened
to Postgres, but "Postgre" is just wrong.

Yep, my typo.

