Re: Performance

Hi Mehmet,

On Wed, 31 Jan 2024 at 13:33, Mehmet COKCEVIK <mehmet@xxxxxxxxxxxxxxx> wrote:
Hi,
We want to work with PostgreSQL in our new project. I need your opinion on the best way to create a database.

First of all, congratulations on your decision to use PostgreSQL for your new project. :)
 
Description of our Project:
It will be a client/server architecture. Windows application users will access the server as clients, and they are all in different locations. There will be a simple ERP system that performs CRUD operations and reporting.

I hope you are not thinking of keeping the business logic on the application side and querying the database from different locations. If you treat the database as a regular application's database and run multiple DML statements for each request over the internet, the application's performance will be horrible due to the latency between the application and the database. If you do plan to use such a model, the best approach is to reduce the number of queries as much as possible and achieve multiple operations with a single request, instead of reading from multiple tables, doing some calculations, writing something back to the database, and so on. I would move the logic to the database side as much as possible and use function/procedure calls, or have an application near the database and make the clients' applications interact with it. That way, the business logic would still live in an application, but close to the database.
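To make that more concrete, here is a minimal sketch of what moving one business operation into the database could look like. The table and column names (orders, order_lines, stock, client_id, and so on) are only assumptions for illustration, not something from your actual schema:

    -- Hypothetical example: a single function call replaces several
    -- client-side round trips (insert the order, insert its lines,
    -- adjust stock), so only one request crosses the network.
    CREATE OR REPLACE FUNCTION create_order(
        p_client_id   bigint,
        p_customer_id bigint,
        p_lines       jsonb   -- e.g. '[{"product_id": 7, "qty": 3}]'
    ) RETURNS bigint
    LANGUAGE plpgsql AS
    $$
    DECLARE
        v_order_id bigint;
    BEGIN
        INSERT INTO orders (client_id, customer_id, created_at)
        VALUES (p_client_id, p_customer_id, now())
        RETURNING order_id INTO v_order_id;

        INSERT INTO order_lines (order_id, product_id, qty)
        SELECT v_order_id,
               (l ->> 'product_id')::bigint,
               (l ->> 'qty')::int
        FROM jsonb_array_elements(p_lines) AS l;

        UPDATE stock s
           SET qty = s.qty - ol.qty
          FROM order_lines ol
         WHERE ol.order_id = v_order_id
           AND s.product_id = ol.product_id;

        RETURN v_order_id;
    END;
    $$;

    -- The client then makes one call per business operation:
    -- SELECT create_order(42, 1001, '[{"product_id": 7, "qty": 3}]');

Whether you implement this as functions/procedures or as an application server sitting next to the database, the point is the same: keep the chatty part of the workload close to the data.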
 
We are considering connecting with Embarcadero FireDAC datasets. We can also connect clients with a PostgreSQL REST API.
Our number of clients can be between 5K and 20K.
We have a maximum of 200 tables consisting of invoice, order, customer, bank, and stock information. I can create a second PostgreSQL server for reporting if necessary.

This is an interesting point, because if you plan to have 20k clients, you should also be planning for high availability, backups, replication, etc. Serving 20k clients with a standalone server is not something I would like to be involved in. :)
 
Question 1:
Should we install PostgreSQL on Windows server operating system or Linux operating system?

In my personal opinion, this is not even a question. The answer is, and always will be, Linux for me. :D
However, the actual question is what the cost of managing a Linux server is for you. If you are not familiar with Linux, if you don't have any experience with it, and if you don't have the budget to hire or work with a professional Linux or PostgreSQL administrator, then going with Windows is a much more sensible option for you, even though it is neither the best OS nor the best-performing option for PostgreSQL.
 
2:
Is it correct to add a field named client_id to each table, for example the customer table, and use this field in CRUD operations so that a single shared customer table hosts all users?

It depends on the data size and your project's isolation/security requirements. You may also consider the partitioning and row level security features of PostgreSQL. There is no single recipe that fits all multi-tenancy needs. :)
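As a rough illustration of the shared-table approach, here is a minimal sketch using row level security. The customers table, the client_id column, and the app.client_id setting are hypothetical; the real policies and roles would depend on your design:

    -- Hypothetical example: one shared table for all tenants, isolated
    -- with row level security instead of per-user tables or schemas.
    CREATE TABLE customers (
        customer_id bigserial PRIMARY KEY,
        client_id   bigint NOT NULL,   -- tenant identifier
        name        text   NOT NULL
    );

    ALTER TABLE customers ENABLE ROW LEVEL SECURITY;

    -- Each session sets its tenant once after connecting, e.g.:
    --   SET app.client_id = '42';
    CREATE POLICY tenant_isolation ON customers
        USING (client_id = current_setting('app.client_id')::bigint);

    -- Queries run by a non-owner application role then only ever see
    -- the current tenant's rows:
    --   SELECT * FROM customers;   -- returns only client_id = 42 rows

Note that policies do not apply to the table owner or to superusers unless you also use FORCE ROW LEVEL SECURITY, so the application should connect with a dedicated, unprivileged role.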
 
3:
Create a separate table for each user? (result: 5000 users x 200 tables = 1,000,000 tables)
4:
Create a database per user? (result: 5000 databases)
5:
Should each user have a separate schema? (result: 5000 schemas)

Can you share your ideas with me?
Thank you.

Best regards.
Samed YILDIRIM
 
