bsreejithin <bsreejithin@xxxxxxxxx> wrote:

> What I posted is about a new setup that's going to come
> up... Discussions are on whether to set up a DB cluster to
> handle 1000 concurrent users.

I previously worked for Wisconsin Courts, where we had a single server that handled about 3000 web users, collectively generating hundreds of web hits per second and thousands of queries per second. At the same time, it functioned as a replication target for 80 sources sending about 20 transactions per second that modified data (many with a large number of DML statements per transaction) against a 3 TB database. The same machine also hosted a transaction repository for all modifications to the database, indexed for audit reports and ad hoc queries; that was another 3 TB. Each of these ran on a 40-drive RAID.

Shortly before I left, we upgraded from a machine with 16 cores and 256 GB RAM to one with 32 cores and 512 GB RAM, because of constant growth in both database size and load. Performance was still good on the smaller machine, but monitoring showed we were approaching saturation. We had started to see some performance degradation on the old machine, but were able to buy time by reducing the size of the web connection pool (in the Java application code) from 65 to 35. Testing different connection pool sizes showed that pool size to be optimal for our workload on that machine; your ideal pool size can only be determined through testing.

You can poke around in this application here, if you like:

http://wcca.wicourts.gov/

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

--
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
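The point about capping the web connection pool can be sketched in a few lines of Java. This is a hypothetical illustration using `java.util.concurrent.Semaphore`, not the Wisconsin Courts code: a fixed number of permits bounds how many requests hold a database connection at once, so excess requests queue briefly in the application instead of adding contention on the database server. The `BoundedPool` class name and the 35-connection cap (taken from the figure in the post) are illustrative assumptions.

```java
import java.util.concurrent.Semaphore;

// Minimal sketch of a bounded connection pool: a Semaphore caps how many
// callers may hold a "connection" at once. Callers beyond the cap block
// in acquire() until a slot frees up.
public class BoundedPool {
    private final Semaphore permits;

    public BoundedPool(int size) {
        this.permits = new Semaphore(size, true); // fair: FIFO queueing
    }

    public void withConnection(Runnable work) throws InterruptedException {
        permits.acquire();          // block until a connection slot is free
        try {
            work.run();             // run the query while holding the slot
        } finally {
            permits.release();      // always return the slot
        }
    }

    public int available() {
        return permits.availablePermits();
    }

    public static void main(String[] args) throws InterruptedException {
        BoundedPool pool = new BoundedPool(35); // cap from the post's tuning
        pool.withConnection(() -> System.out.println("query ran"));
        System.out.println(pool.available()); // all 35 permits free again
    }
}
```

Real pools (e.g. in a Java web stack) add connection reuse, timeouts, and health checks, but the sizing lesson is the same: the cap itself is the tunable, and as the post says, the right value only falls out of testing your own workload.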