On Fri, 2022-11-11 at 17:09 +0000, Alexis Zapata wrote:
> In PostgreSQL 13.5 I have a table (size 3.1 GB) that receives about
> 200 updates per second. After 2 days the table has grown to 7 GB,
> bloat has reached 45%, and query performance is degraded. VACUUM runs
> on the table every 5 seconds, but the bloat keeps growing. To solve
> the problem quickly, we created a replica of the table with a
> trigger, copied the data over, and renamed the table in a
> transaction, but that is not a good long-term solution. Any
> suggestions for stopping this growth, or a parameter to tune?

You'd be most happy with HOT updates. Make sure that there is no index
on any of the columns you update, and change the table to have a
"fillfactor" less than 100. Then you can get HOT updates, which don't
require VACUUM for cleanup.

https://www.cybertec-postgresql.com/en/hot-updates-in-postgresql-for-better-performance/
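As a rough sketch of that advice (the table name "mytable" is a placeholder, and 70 is just an example value; the right fillfactor depends on your row size and update rate):

```sql
-- Leave ~30% free space on each heap page so an updated row version
-- can be written to the same page as the old one, which is a
-- prerequisite for HOT updates.
ALTER TABLE mytable SET (fillfactor = 70);

-- fillfactor only affects pages written from now on; to apply the new
-- free space to existing pages, the table has to be rewritten once
-- (note: VACUUM FULL takes an ACCESS EXCLUSIVE lock on the table).
VACUUM FULL mytable;
```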
To clarify: do HOT updates happen automatically if there's enough space on the page AND you don't update an indexed column (which should be minimized anyway)?
If so, what happens if someone then updates an indexed column? Does PG keep doing HOT updates on the other tuples, or does it stop doing HOT updates altogether until you re-CLUSTER or VACUUM FULL the table?
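One way to check empirically whether updates are taking the HOT path is the cumulative statistics view `pg_stat_user_tables` ("mytable" is again a placeholder name):

```sql
-- Compare total updates with HOT updates for one table; if
-- n_tup_hot_upd tracks n_tup_upd closely, most updates are HOT.
SELECT relname, n_tup_upd, n_tup_hot_upd
FROM pg_stat_user_tables
WHERE relname = 'mytable';
```

Watching these counters before and after lowering fillfactor should show whether the change had the intended effect.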
--
Angular momentum makes the world go 'round.