On 6/27/23 9:32 AM, Ben Chobot wrote:
> We certainly have databases where far more than 100 tables are updated within a 10 second period. Is there a specific concern you have?

Thanks Ben, not a concern, but I'm trying to better understand how common this might be. And I think sharing general statistics about how people use PostgreSQL is a great help to the developers who build and maintain it. One really nice thing about PostgreSQL is that with two quick copies of pg_stat_all_tables you can easily see this sort of info. If you have a database where more than 100 tables are updated within a 10 second period - this seems really uncommon to me - I'm very curious about the workload.
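For reference, this is roughly what I mean by the two-snapshot approach (just a sketch; the temp table name snap1 is only for illustration):

    -- First snapshot of the per-table write counters.
    CREATE TEMP TABLE snap1 AS
        SELECT relid, schemaname, relname, n_tup_ins, n_tup_upd, n_tup_del
        FROM pg_stat_all_tables;

    -- Wait roughly ten seconds, e.g. SELECT pg_sleep(10);

    -- Any table whose counters moved was written to during the window.
    SELECT s.schemaname, s.relname,
           t.n_tup_ins - s.n_tup_ins AS inserts,
           t.n_tup_upd - s.n_tup_upd AS updates,
           t.n_tup_del - s.n_tup_del AS deletes
    FROM pg_stat_all_tables t
    JOIN snap1 s USING (relid)
    WHERE (t.n_tup_ins, t.n_tup_upd, t.n_tup_del)
          IS DISTINCT FROM (s.n_tup_ins, s.n_tup_upd, s.n_tup_del);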
Updates to 100 tables just means a possibly complicated schema, not necessarily high volume.
More important is the number of tables updated in a single transaction. Are you updating (really modifying: inserts, updates, deletes) lots of rows in all 100 tables in a single transaction, or are multiple users each performing one of 20 separate transactions, each modifying 5 tables? Because that makes a huge difference.
And honestly, 100 tables in 10 seconds is 10 tables/second. If each gets one insert, that's a laughably slow transaction rate. (Unless, of course, there are 85 indexes per table and the foreign keys don't have supporting indexes.)
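If you want to see the actual transaction rate rather than guess, two reads of pg_stat_database about ten seconds apart will show it. A rough sketch, nothing more:

    -- Run once, note the values, run again ~10 seconds later.
    -- (xact_commit delta) / (elapsed seconds) = commits per second.
    SELECT now() AS ts, xact_commit, xact_rollback
    FROM pg_stat_database
    WHERE datname = current_database();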
For example:
1) Is the overall total number of tables for this database in the thousands, tens of thousands, or hundreds of thousands?
2) How many CPUs or cores does the server have?
3) Are you using partitions and counting each one? What's the number if you count each partitioned table as a single table? (A quick way to count is sketched below.)
4) Would you characterize this database as SaaS, i.e. many copies of a similar schema?
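For question 3, here's a rough way to count while treating a partitioned table and all of its partitions as one table (a sketch; the relkind/relispartition filtering may need adjusting on older versions):

    -- Count ordinary and partitioned tables, excluding individual partitions.
    SELECT count(*)
    FROM pg_class
    WHERE relkind IN ('r', 'p')
      AND NOT relispartition;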
Why not multiple databases with the same definition?
Or is it one very large schema of many different tables?
--
Born in Arizona, moved to Babylonia.