Re: max_connections and shared_buffers

On 9/2/07, Anoo Sivadasan Pillai <aspillai@xxxxxxxxx> wrote:
>
> Hi,
>
> 1)      I saw a comment from experts-exchange regarding shared_buffers,
> where max_connections was 600
>
> "2000 shared buffers were for 40 connections
>  For 600 connections it looks more like 30000 shared buffers - to prevent
> weekly slowdown.
>  i.e 8KB*30000 = 240MB
>
>  16M work_mem * 600 = 9600MB maximum when everyone is connected. "
>
> Can anybody explain the logic behind the calculation?

Those two things are not directly related.

The minimum number of shared buffers determined by max_connections is
just that: a minimum.  It is not the recommended value, which can often
be much higher, especially on a large machine with lots of RAM.
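
For reference, here is a sketch of what the quoted advice would look
like in postgresql.conf.  These numbers come straight from the quote
above, not from your hardware; on versions where shared_buffers is a
plain integer it counts 8KB buffers, and work_mem is in KB:

    max_connections = 600
    shared_buffers = 30000      # 30000 * 8KB buffers = 240MB
    work_mem = 16384            # 16MB per sort, per backend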

The 16MB work_mem * 600 figure comes from the fact that if you had 600
clients connected and they all ran a query with one sort (a query can
have more than one sort, by the way), the server would need
600 * sort_mem (now work_mem) of memory to handle all those sorts, on
top of the memory being used for other things.  sort_mem (now work_mem)
is NOT taken from memory that has already been allocated, the way
shared buffers is; it is allocated fresh from free memory.  Using up
too much of your free memory will make your server start swapping and
slow to a crawl.
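
To make that arithmetic concrete, using the same numbers as the quote
(and remembering a single query can run more than one sort):

    600 connections * 1 sort each * 16MB work_mem = 9600MB
    600 connections * 2 sorts each * 16MB work_mem = 19200MB

All of that comes out of free memory, on top of shared_buffers and
whatever the OS itself needs.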

> Why are 30000 shared buffers suggested for 600 connections?

Not sure.  We don't have the context here.  For that load, on that
machine, that's what they needed.

> Postgresql help says "This setting must be at least 128 kilobytes and at
> least 16 kilobytes times max_connections."

Right.  Two 8KB blocks per connection is the minimum.
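
Worked out for the numbers in this thread, that minimum is tiny:

    600 connections  * 16KB = 9600KB   ->  1200 buffers
    1024 connections * 16KB = 16384KB  ->  2048 buffers

which is well below the 30000 buffers in the quoted advice; the minimum
only keeps the server from starting with too few buffers to function,
it says nothing about what performs well.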

> 2)  I want to set  max_connections=1024
>
> Can anybody help to suggest a proper value for shared buffers for the
> settings ( if no other settings are counted )

Why do you want to set max_connections to 1024?  I would strongly
suggest using some kind of connection pooling rather than trying to
run 1k connections at once.

If you need to have 100 or so connections active at once, set up a
connection pool (pgpool or pgbouncer or Java connection pooling, etc.)
for that many connections to the db, and use that.  Connections aren't
free: they require some memory and some interaction with the other
backends, and they WILL slow down your db server unnecessarily.
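
As a rough illustration only (the host, database name and pool sizes
here are made up, not a recommendation), a pgbouncer.ini that lets
1024 clients connect while opening only about 100 real backends might
look something like this:

    [databases]
    mydb = host=127.0.0.1 port=5432 dbname=mydb

    [pgbouncer]
    listen_addr = *
    listen_port = 6432
    auth_type = md5
    auth_file = /etc/pgbouncer/userlist.txt
    pool_mode = transaction
    max_client_conn = 1024     ; clients allowed to connect to the pooler
    default_pool_size = 100    ; real connections opened to PostgreSQL

The application then connects to port 6432 instead of 5432, and
max_connections in postgresql.conf can stay around 100-150.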

And please, if you can, shorten your sig.  It's way too long.  I know,
some half baked lawyer somewhere in the company told you you have to
do it, but for a public mailing list it seems kinda overblown.

