
Re: Streaming Replica Master-Slave Config.


On Fri, 5 Aug 2016 12:43:43 -0700
John R Pierce <pierce@xxxxxxxxxxxx> wrote:

> On 8/4/2016 9:15 AM, Eduardo Morras wrote:
> > If you set max_connections too high, those connections will
> > compete/fight for the same resources: CPU, disk I/O, memory and
> > caches, and locks, and postgres will spend more time managing
> > resources than doing real work. Believe me (or us): set it as we
> > say and use a bouncer like pgbouncer. It can run on the same
> > server.
> 
> Idle connections only use a small amount of memory, a process, a
> socket, and some file handles. When you have multiple databases,
> it's impossible to share a connection pool across them.
> 
> The OP is talking about having 350 'tenants', each with their own
> database and user on a single server.

No, I was answering Periko; the mail you cite is from Moreno Andreo,
and I copy and indent it here:

> On 04/08/2016 18:15, Eduardo Morras wrote:
>> [...]
>> a) As others said, max_connections = 200 is too high. Set it to your
>> number of cores (I use number of cores - 1)
> Excuse me for cross-threading, but I have to make things clearer to
> myself. That's one of the things I find hard to approach in my
> architecture.
> My server has 350 DBs and 350 users, each with their own DB. Every
> user has a "persistent" connection (used to replicate with rubyrep)
> and some "burst" connections to connect to colleagues' DBs when
> necessary.
> I'm going to split it across 2 servers, but that doesn't change
> things.
> Even with pgbouncer, how can I manage having all these replication
> connections (plus "burst" connections) with pgbouncer and a low
> (<100) max_connections? Is it even possible?
> Thanks
> Moreno.

He is asking about a different scenario: multi-user and multi-database.
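
For Moreno's case, a single pgbouncer in front of all the tenant
databases could keep the server-side connection count low. This is only
a minimal sketch: the host, port, paths and pool sizes below are
illustrative, and the long-lived rubyrep sessions would likely need
session pooling (or a direct connection) rather than transaction
pooling:

    [databases]
    ; one wildcard entry forwards every tenant database to the local server
    * = host=127.0.0.1 port=5432

    [pgbouncer]
    listen_addr = 127.0.0.1
    listen_port = 6432
    auth_type = md5
    auth_file = /etc/pgbouncer/userlist.txt
    pool_mode = transaction    ; "burst" clients share server slots
    default_pool_size = 2      ; server connections per (user, database) pair
    max_client_conn = 1000     ; client side can stay above max_connections
    max_db_connections = 4     ; hard per-database cap on server connections

With something like this, what actually reaches the server is bounded
by max_db_connections per database, not by how many clients happen to
be connected at once.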

> Your 1 connection per core suggestion is ludicrous for this
> scenario. In many database applications, most connections are idle
> most of the time. Sure, you don't want much over about 2-4X your
> CPU thread count actually active doing queries at the same time if
> you want the maximum aggregate transactions/second throughput, but
> you can still get acceptable performance several times higher than
> that, depending on the workload. In my benchmarks the aggregate TPS
> rolls off fairly slowly for quite a ways past the 2-4 connections
> per hardware thread or core level, at least doing simple OLTP stuff
> on a high-concurrency storage system (lots of fast disks in RAID10).

Yes, for this scenario, where each of multiple users/apps has its own
database (A*(1:1)) or several (A*(1:D)), you are right. But the OP,
Periko, asked about a Pentaho setup of type (A*(C:1)), where pgbouncer
can help, and about OLAP queries, which are CPU/IO bound.

A*(C:D), where:
  A = number of users/clients/apps
  C = number of connections per A
  D = number of databases in the server
  (lowercase a, c or d denotes a small value, below a "normal"
  threshold)
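
To see which of these scenarios a live server is actually in, one can
count backends per database, user and state in pg_stat_activity (the
state column exists since 9.2); a quick sketch:

    -- How many connections exist per database/user, and how many are
    -- really active vs. idle; supports the 2-4x-cores rule of thumb.
    SELECT datname, usename, state, count(*) AS connections
    FROM pg_stat_activity
    GROUP BY datname, usename, state
    ORDER BY connections DESC;

If most rows come back with state = 'idle', a pooler will reclaim most
of those slots.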


> -- 
> john r pierce, recycling bits in santa cruz

---   ---
Eduardo Morras <emorrasg@xxxxxxxx>




