On Mon, Sep 4, 2017 at 2:14 AM, 우성민 <dntjdals0513@xxxxxxxxx> wrote:
> Hi team,
>
> I'm trying to configure postgres and pgbouncer to handle many inserts from
> many connections.
>
> Here are some details about what I want to achieve:
>
> We have more than 3,000 client connections, and my server program forks a
> backend process for each client connection.

This is a terrible configuration for any kind of performance. Under load,
all 3,000 connections can quickly swamp your server, slowing it to a crawl.

Get a connection pooler involved. I suggest pgbouncer unless you have very
odd pooling needs. It's easy, small, and fast. Funnel those 3,000 client
connections down to fewer than 100 server connections if you can. It will
make a huge difference in performance and reliability.

> System information:
> PgBouncer 1.7.2
> PostgreSQL 9.6.3 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.4.7
> 20120313 (Red Hat 4.4.7-18), 64-bit, on CentOS release 6.9 (Final)
> Kernel version 2.6.32-696.10.1.el6.x86_64
> Intel(R) Xeon(R) CPU E5-2650 0 @ 2.00GHz processor
> 32GB ECC/REG-Buffered RAM
> 128GB Samsung 840 EVO SSD

If it's still slow after connection pooling is set up, then look at throwing
more SSDs at the problem. If you're using a HW RAID controller, turn off
caching with SSDs unless you can prove it's faster with it. It almost never is.

--
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
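
[For reference, the funneling described above might look like the pgbouncer.ini
sketch below. The database name, ports, paths, and pool sizes are illustrative
assumptions, not values from this thread; tune them to your workload.]

    ; pgbouncer.ini sketch (hypothetical values): clients connect to port 6432,
    ; and pgbouncer multiplexes them onto at most 50 real PostgreSQL backends.
    [databases]
    ; "mydb" is a placeholder database name
    mydb = host=127.0.0.1 port=5432 dbname=mydb

    [pgbouncer]
    listen_addr = *
    listen_port = 6432
    auth_type = md5
    auth_file = /etc/pgbouncer/userlist.txt
    ; transaction pooling gives the best reuse for an insert-heavy workload,
    ; provided the app doesn't rely on session state (e.g. prepared statements)
    pool_mode = transaction
    ; cap real backend connections well below the 3,000+ clients
    default_pool_size = 50
    ; allow all the clients to connect to pgbouncer itself
    max_client_conn = 3500

[Point the 3,000 clients at port 6432 instead of 5432, and set PostgreSQL's
max_connections just above default_pool_size plus pgbouncer's admin overhead.]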