Re: Problems with PG 9.3

Dhruv Shukla <dhruvshukla82@xxxxxxxxx> wrote:
> Kevin Grittner <kgrittn@xxxxxxxxx> wrote:
>> Dhruv Shukla <dhruvshukla82@xxxxxxxxx> wrote:

>>> Other informational details about configurations are:
>>> shared_buffers = 80GB
>>> temp_buffers = 2GB
>>> work_mem = 2GB
>>> maintenance_work_mem = 16GB

>> How much RAM is on the machine (or VM)?

> Currently we have a max connection setting for 1000 connections.

The temp_buffers setting is a limit *per connection*, and once RAM
for temporary tables is allocated to a connection it is never
released for the life of that session; so even when all connections
are idle they could be setting aside 2TB of RAM for possible use in
caching temporary tables.

The work_mem setting is an attempt to limit RAM per node of an
executing query; one connection can create many allocations of the
size set for work_mem.  Since not all queries have nodes that
require such allocations, and not all connections are necessarily
active at the same time, a general rule of thumb is to allow for
one work_mem allocation per connection allowed by max_connections.
So these settings can easily result in another 2TB of allocations,
beyond the temp_buffers mentioned above.
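A rough sketch of the arithmetic behind those two figures, using the
settings quoted above (the multipliers are worst-case assumptions from
the rule of thumb, not measurements):

```python
# Worst-case memory arithmetic for the quoted settings:
# max_connections = 1000, temp_buffers = 2GB, work_mem = 2GB.
GB = 1024 ** 3

max_connections = 1000
temp_buffers = 2 * GB   # per connection, kept for the session's lifetime
work_mem = 2 * GB       # per sort/hash node of an executing query

# temp_buffers: every session can eventually hold its full quota.
temp_worst = max_connections * temp_buffers

# work_mem rule of thumb: budget one allocation per allowed
# connection (a single complex query can use several at once).
work_worst = max_connections * work_mem

print(f"temp_buffers worst case: {temp_worst / 1024**4:.1f} TiB")
print(f"work_mem worst case:     {work_worst / 1024**4:.1f} TiB")
```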

A high shared_buffers setting can result in "write gluts" at the OS
level when a high percentage of that memory becomes dirty and is
dumped to the OS in a short time.  This can result in the OS
appearing to "lock up" for as long as it takes to flush all of
the dirty data to disk.  I don't know what your system is capable
of, but most database servers I've seen sustain between 20MB/second
and 200MB/second of writes.  That would correspond to apparent stalls lasting
between 6.8 minutes and 1.1 hours.  By tweaking the bgwriter
settings and the OS dirty settings you can drastically reduce this,
but I don't think you've indicated having done that, so 80GB can be
expected to cause apparent lockups of those durations.
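Those stall durations follow directly from the throughput range; a
minimal check of the numbers:

```python
# How long would flushing 80GB of dirty shared_buffers take at the
# quoted throughput range of 20-200 MB/s?
dirty_mb = 80 * 1024           # 80GB expressed in MB

fast = dirty_mb / 200 / 60     # minutes at 200 MB/s
slow = dirty_mb / 20 / 3600    # hours at 20 MB/s

print(f"best case:  {fast:.1f} minutes")   # ~6.8 minutes
print(f"worst case: {slow:.1f} hours")     # ~1.1 hours
```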

A high shared_buffers setting makes you more vulnerable to long
stalls because of transparent huge page operations of the OS.
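On Linux, transparent huge pages can usually be disabled at runtime; a
sketch of the common approach (paths vary by distribution and kernel
version, so verify against your kernel's documentation):

```
# Requires root; takes effect immediately but does not survive a
# reboot, so it is typically also set in a boot script.
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag
```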

> And RAM on server is 384GB RAM.

And overcommitting RAM by a factor of more than 10x is not going to
be pretty in any event.

If I could not reduce max_connections, I would set work_mem to no
more than 100MB and temp_buffers to no more than 50MB.  I would
drop maintenance_work_mem to 2GB.  I would probably drastically
reduce shared_buffers and would most certainly make autovacuum and
bgwriter more aggressive than the default.
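Expressed as a postgresql.conf fragment, those recommendations might
look something like this (the shared_buffers, bgwriter, and autovacuum
values are illustrative guesses, not figures from the message; tune
them for your own workload):

```
max_connections = 1000
work_mem = 100MB               # down from 2GB
temp_buffers = 50MB            # down from 2GB
maintenance_work_mem = 2GB     # down from 16GB
shared_buffers = 8GB           # "drastically reduced"; exact value is a guess
bgwriter_delay = 100ms         # more aggressive than the 200ms default
bgwriter_lru_maxpages = 1000
autovacuum_naptime = 15s       # wake autovacuum more often
```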

If you make those changes and still see a problem, only then is it
worth looking at other possible causes.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-admin mailing list (pgsql-admin@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin




