>> The problem with our "cheap" connection pool is that the persistent
>> connections don't seem to be available immediately after they're
>> released by the previous process. pg_close doesn't seem to help the
>> situation. We understand that pg_close doesn't really close a
>> persistent connection, but we were hoping that it would cleanly
>> release it for another client to use. Curious.
>
> Yeah, the persistent connects in php are kinda as dangerous as they
> are useful. Have you tried using regular connects just to compare
> performance? On Linux they're not too bad, but on Windows (the pg
> server, that is) it's pretty horrible performance-wise.

Yes, we have. Regular connections are pretty slow, even when the application server is on the same box as the db server.

>> We've also tried third-party connection pools and they don't seem to
>> be real fast.
>
> What have you tried? Would pgbouncer work for you?

We've tried pgBouncer. It's pretty good. Here are more details on what we're running.

We have three servers: A, B, and C. All of them are on the same rack, sharing a gigabit switch. A test application (Apache Bench) runs on A and sends 5000 requests to our application server; we can control how many of those requests are sent concurrently, and I'll call that concurrency parameter TCON below. The application server now runs on B and is basically Apache with the PHP5 module. Good ol' Postgres runs on C.

We have two basic configurations; rough sketches of both, plus the ab command line, are further down in this message.

The first configuration uses the "cheap" connection pooling. As I said before, we configure Postgres to allow only 40 clients, and the application server uses a pconnect wrapper that blocks until it gets a db connection -- I guess you'd call it a "polling connection pool".

We have run the first configuration with both persistent and non-persistent connections. With persistent connections and a TCON of 40, Apache Bench reports ~100 requests per second with CPU utilization around 80%. With non-persistent connections and the same TCON, we process only ~30 requests per second with CPU utilization around 30% (a somewhat surprising drop). If we raise TCON to 200 while keeping persistent connections, we manage only ~23 requests per second. It looks like the flood of failing connection attempts from our pconnect wrapper is killing db performance.

The second configuration uses pgBouncer. We run pgBouncer on the same server as Postgres and configure it to accept an effectively unlimited number of incoming client connections while opening at most 40 connections to the actual Postgres db. We also raise the Postgres client limit to 60, just to keep it above what pgBouncer should be using. With TCON set to any number >= 40, this configuration processes ~83 requests per second.

So pgBouncer is pretty good. It doesn't appear to be quite as good as limiting TCON and using pconnect, but since we can't limit TCON in a production environment, we may not have a choice.

Does anyone know why failed db connection attempts would have such a drastic performance hit on the system? I suppose it matters how many connections we were attempting. Maybe we're killing ourselves with a denial-of-service attack of sorts (hence, polling is bad). But I'm told we were only polling every half second -- basically 160 processes (presumably the 200 concurrent clients minus the 40 holding connections) each attempting to connect every half second.
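In case it helps to see the harness: the test run from A is just Apache Bench pointed at the app on B, something like the line below. The URL is made up, but -n is the total number of requests and -c is the concurrency I've been calling TCON.

    ab -n 5000 -c 40 http://B/app/handler.php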
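The "cheap" polling pool is roughly the sketch below -- simplified, not our production code; the function name, connection string, and retry interval are illustrative. Postgres itself is capped at 40 clients, so pg_pconnect() simply fails until another Apache process releases a slot, and we sleep and retry.

    <?php
    // Simplified sketch of the polling pconnect wrapper (illustrative names/values).
    // Postgres is configured to allow only 40 clients, so when every slot is
    // busy pg_pconnect() fails and we just wait and try again.
    function get_pooled_connection($conninfo, $retry_usec = 500000)
    {
        while (true) {
            $conn = @pg_pconnect($conninfo);   // @ hides the "too many clients" warning
            if ($conn !== false
                && pg_connection_status($conn) === PGSQL_CONNECTION_OK) {
                return $conn;
            }
            usleep($retry_usec);               // no slot free: poll again in half a second
        }
    }

    $db  = get_pooled_connection('host=C dbname=app user=app password=secret');
    $res = pg_query($db, 'SELECT 1');
    ?>

The downside, as the numbers above suggest, is that every blocked Apache child keeps hammering the postmaster with fresh connection attempts while it waits.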
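The second configuration looks roughly like the following. The database name, auth settings, and paths are illustrative; the pool numbers are the ones described above. pgBouncer listens on C, and the PHP side connects to it (port 6432 here) instead of to Postgres directly.

    ; pgbouncer.ini (sketch -- illustrative names/paths)
    [databases]
    app = host=127.0.0.1 port=5432 dbname=app

    [pgbouncer]
    listen_addr = *
    listen_port = 6432
    auth_type = md5
    auth_file = /etc/pgbouncer/userlist.txt
    pool_mode = session
    max_client_conn = 10000     ; "infinite" incoming clients, for our purposes
    default_pool_size = 40      ; at most 40 real connections to Postgres

    # postgresql.conf (sketch)
    max_connections = 60        # kept above what pgBouncer should ever open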
In any case, I suppose 320 attempts per second (160 processes polling twice a second) could cause a lot of interrupts and context switches. I don't think we have the context-switch numbers handy for all of those runs; maybe we can get them tomorrow.

Anyway, thanks again for your help thus far.

--
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance