Re: Concurrent Users

We have a number of medium-sized databases, between 30 and 40 TB, that seem to perform well with some workloads. Our applications are mostly insert-intensive, and we can do several thousand inserts per second with databases in that range. Small inserts seem to be limited in performance by index updates, which need to be as efficient as possible. Indexing sequential values like serial numbers and dates works well, but indexing random values like UUIDs does not.
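
As a rough sketch of what I mean by sequential versus random keys (the table names and row counts here are made up, and uuid_generate_v4() assumes the uuid-ossp extension is installed):

-- Sequential key: new entries always land on the right-hand edge of the btree.
CREATE TABLE events_seq (
    id      bigserial PRIMARY KEY,
    payload text
);

-- Random key: new entries scatter across the whole btree, touching far more pages.
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
CREATE TABLE events_uuid (
    id      uuid PRIMARY KEY DEFAULT uuid_generate_v4(),
    payload text
);

-- Load both with the same number of rows and compare timings (\timing in psql).
INSERT INTO events_seq  (payload) SELECT 'x' FROM generate_series(1, 1000000);
INSERT INTO events_uuid (payload) SELECT 'x' FROM generate_series(1, 1000000);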

It might be worth checking the number of file descriptors available to the user running your database. Postgres can burn through them quickly with the default segsize. While I've not seen Postgres die from this, it doesn't help performance if it's rapidly opening and closing many thousands of data files. On ZFS, I've also seen performance improvements from increasing the blocksize beyond the default.
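
If you want to see how this adds up on your system, something like the following gives a rough idea (it assumes the default 1 GB --with-segsize; adjust the divisor if you built with something else):

-- How many files a single backend is allowed to hold open.
SHOW max_files_per_process;

-- Approximate number of 1 GB data segments behind your largest tables,
-- a proxy for how many files big scans will open and close.
SELECT relname,
       pg_size_pretty(pg_relation_size(oid))           AS table_size,
       ceil(pg_relation_size(oid) / (1024.0 ^ 3))::int AS approx_segments
FROM pg_class
WHERE relkind = 'r'
ORDER BY pg_relation_size(oid) DESC
LIMIT 10;

Compare that against whatever ulimit -n reports for the postgres user.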

I would not recommend putting all of your data in a small number of tables if you need maintenance tasks to run as fast as possible (VACUUM FULL, dumps, restores, etc.). Postgres is not great at parallelizing maintenance on individual tables. It is, however, pretty good at distributing load between multiple concurrent users performing similar types of queries. If you're only expecting 100 or so users, I don't think you'll have any problems. You will need to tune parameters in postgresql.conf to match your hardware and workload, as described in many places online.
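
For what it's worth, these are the knobs I'd look at first. The query just reads back whatever your installation currently has; the right values depend entirely on your hardware and workload (checkpoint_segments is the 9.x-era name):

SELECT name, setting, unit, source
FROM pg_settings
WHERE name IN ('max_connections', 'shared_buffers', 'work_mem',
               'maintenance_work_mem', 'effective_cache_size',
               'checkpoint_segments', 'wal_buffers')
ORDER BY name;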

The best advice I can give is to benchmark your schema and usage patterns. Create a database, put as much data in it as you could ever hope to need, and then add even more. Performance will not change linearly; eventually you'll hit some sort of wall. You'll want to know where that is sooner rather than later.
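
A crude way to build that over-sized test database is to synthesize rows with generate_series and then time your application's real queries against it. The schema and sizes below are only placeholders for whatever yours actually look like:

-- Generate far more rows than production will ever hold.
CREATE TABLE bench_observations (
    id          bigserial PRIMARY KEY,
    recorded_at timestamptz NOT NULL,
    sensor_id   int NOT NULL,
    reading     double precision
);

INSERT INTO bench_observations (recorded_at, sensor_id, reading)
SELECT now() - g * interval '1 second',
       (g % 1000) + 1,
       random()
FROM generate_series(1, 50000000) AS g;

-- Index the way the application will actually query, then measure.
CREATE INDEX ON bench_observations (sensor_id, recorded_at);
ANALYZE bench_observations;

EXPLAIN ANALYZE
SELECT avg(reading)
FROM bench_observations
WHERE sensor_id = 42
  AND recorded_at > now() - interval '1 day';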

	- .Dustin

On Aug 27, 2013, at 1:05 AM, shyam megha <shyamnguitar@xxxxxxxxx> wrote:

> Dear Psql Team,
> 
> Hi there! I have two queries of Postgresql 9.x
> 
> 1. What is the number of concurrent users who can access the PostgreSQL 9.x server at a time? Is it necessary for me to go for Postgres Advanced Server plus or any other advanced edition 
> 
> In our company we are looking for concurrent access of geospatial data by 100 users at a time.
> 
> 2. Also is there maximum storage value that Postgres supports? We are planning to work with Terabytes of data.
> 
> 
> 
> Regards,
> Sam



-- 
Sent via pgsql-admin mailing list (pgsql-admin@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin




