> Each of the 256 requests was being processed by a PHP process, so it
> could certainly be faster. But the fact that we're seeing the DB
> performance degrade would seem to indicate that our application is
> fast enough to punish the DB. Isn't that true?
Not necessarily. Your DB still has plenty of idle CPU, so perhaps it's your
benchmark client that is maxing out, or you have locking problems in your
DB.
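
To check the locking angle, a quick look at pg_locks during the load test
will show whether sessions are stuck waiting (a minimal sketch; exact
columns vary a little between PostgreSQL versions):

    -- ungranted locks are sessions waiting on somebody else
    SELECT locktype, relation::regclass AS relation, mode, granted, pid
    FROM pg_locks
    WHERE NOT granted;

If that returns rows while throughput drops, you are looking at contention
rather than raw speed.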
Things to test (a rough sketch of the commands follows the list):
- vmstat on the benchmark client
- iptraf on the network link
- monitor ping times between client and server during the load test
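
For example, on the client side while the benchmark runs (interface name
and DB hostname are illustrative):

    vmstat 1                 # CPU, run queue and context switches on the client
    iptraf -i eth0           # live traffic on the link carrying the test
    ping -i 0.2 db-server    # watch for latency spikes under load

If the client's CPUs are pegged or ping times balloon, the bottleneck is
not the database.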
Some time ago, I made a benchmark simulating a forum. Postgres was
saturating the gigabit ethernet between server and client...
If those PHP processes run inside Apache, I'd suggest switching to
lighttpd/FastCGI: it performs better and runs a limited, controllable pool
of PHP processes (and therefore DB connections), which in turn uses much
less memory.
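
Something along these lines in lighttpd.conf caps the pool (paths and
numbers are illustrative, tune them for your box):

    server.modules += ( "mod_fastcgi" )

    fastcgi.server = ( ".php" =>
      (( "bin-path"        => "/usr/bin/php-cgi",
         "socket"          => "/tmp/php-fastcgi.socket",
         "max-procs"       => 4,
         "bin-environment" => (
           "PHP_FCGI_CHILDREN"     => "4",
           "PHP_FCGI_MAX_REQUESTS" => "10000"
         )
      ))
    )

max-procs times PHP_FCGI_CHILDREN is the total number of PHP workers, and
with persistent connections that is also your ceiling on DB connections.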
PS: try these settings:
wal_sync_method = fdatasync
wal_buffers = 64MB
wal_writer_delay = 2ms
synchronous_commit = off (asynchronous commit, accepting up to about 1 s of unflushed commits)