I'm already using pgbouncer as a connection pooler with default_pool_size = 96.
I checked “show pools”; max_wait was as high as 70 or more, while INSERT
statement duration is around 3000 ms in the postgres log.
These numbers increase over time.
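For reference, a minimal way to watch that backlog from the pgbouncer
admin console (assuming the default listen port 6432 and an admin user
named pgbouncer; adjust for your setup):

    psql -h 127.0.0.1 -p 6432 -U pgbouncer pgbouncer -c 'SHOW POOLS;'

cl_waiting is the number of clients queued for a server connection, and
maxwait is how long (in seconds) the oldest waiting client has been
queued; both climbing over time means the pool itself is saturated.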
I’ll try RAID with more SSDs.
Thank you for your response.
On Tue, Sep 5, 2017 at 3:15 AM, Scott Marlowe <scott.marlowe@xxxxxxxxx> wrote:
On Mon, Sep 4, 2017 at 2:14 AM, 우성민 <dntjdals0513@xxxxxxxxx> wrote:
> Hi team,
>
> I'm trying to configure postgres and pgbouncer to handle many inserts from
> many connections.
>
> Here are some details about what I want to achieve:
>
> We have more than 3000 client connections, and my server program forks a
> backend process for each client connection.
This is a terrible configuration for any kind of performance. Under
load, all 3,000 connections can quickly swamp your server, slowing it
to a crawl.
Get a connection pooler involved. I suggest pgbouncer unless you have
very odd pooling needs. It's easy, small, and fast. Funnel those 3,000
connections down to <100 if you can. It will make a huge difference in
performance and reliability.
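For concreteness, a minimal pgbouncer.ini along those lines might look
like the sketch below (the database name, auth file path, and pool size
are placeholders; size the pool against your own workload):

    [databases]
    mydb = host=127.0.0.1 port=5432 dbname=mydb

    [pgbouncer]
    listen_addr = 127.0.0.1
    listen_port = 6432
    auth_type = md5
    auth_file = /etc/pgbouncer/userlist.txt
    ; accept all 3000+ clients, but open at most ~90 server connections
    max_client_conn = 3500
    pool_mode = transaction
    default_pool_size = 90

Transaction pooling lets a handful of server connections serve many
clients, but it breaks session-level features (prepared statements,
advisory locks, SET), so check that the application is compatible
before switching from session mode.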
> System information :
> PGBouncer 1.7.2.
> PostgreSQL 9.6.3 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.4.7
> 20120313 (Red Hat 4.4.7-18), 64-bit on CentOS release 6.9 (Final).
> Kernel version 2.6.32-696.10.1.el6.x86_64
> Intel(R) Xeon(R) CPU E5-2650 0 @ 2.00GHz processor.
> 32GB ECC/REG-Buffered RAM.
> 128GB Samsung 840 evo SSD.
If it's still slow after connection pooling is set up, then look at
throwing more SSDs at the problem. If you're using a HW RAID
controller, turn off caching with SSDs unless you can prove it's
faster with it. It almost never is.
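As an illustration, on an LSI/MegaRAID controller (the binary name
varies by distro, and these flags are from memory, so double-check
against your controller's docs) write-back caching can be switched off
by setting the logical drives to write-through:

    MegaCli64 -LDSetProp WT -LAll -aAll      # force write-through
    MegaCli64 -LDGetProp -Cache -LAll -aAll  # verify the cache policy

Benchmark with your real INSERT load (or pgbench) before and after, and
keep whichever setting actually measures faster on your hardware.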