I have a multi-tenant database that I'm migrating from SQL Server to PostgreSQL 9.6.1, and I've read the recent articles about the potential write-amplification issue in Postgres.

One particular table has 14 columns, a primary key, five foreign keys, and eight indexes. We have a little over a thousand devices on the Internet (a number that will increase over time), each of which inserts a row into this table and then updates two columns in that row about once a minute for the next two hours. The two columns are NOT NULL and are neither foreign keys nor indexed.

I've thought about moving those two columns into a one-to-one related table. Any thoughts on whether this is a wise move, or am I making a mountain out of a molehill? It looks like this scenario would be covered by heap-only tuple (HOT) updates, but with over a hundred updates to the same row and over a thousand different rows being updated at a time, will I actually reap the benefits?
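For reference, the per-device pattern is roughly the following (table and column names are placeholders, not our real schema), issued about once a minute per device for two hours:

    -- one initial insert per device session
    INSERT INTO device_readings (device_id, started_at /* , ... 12 more columns */)
    VALUES ($1, now());

    -- then repeated updates to two non-indexed, non-FK columns
    UPDATE device_readings
       SET last_value   = $1,
           last_seen_at = $2
     WHERE id = $3;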
With a reasonable fillfactor on the table you would probably be OK, but I'm partial to separating the static and dynamic data into separate tables if the rest of the model and the intended applications support it. The main concern is how many of your queries have a WHERE clause that includes columns from both sets: cross-table statistical estimates are problematic, but if you don't have to worry about them it would be conceptually cleaner to set up a one-to-one relationship here.
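As a sketch of both options (names are illustrative, taken from the hypothetical schema above, not your actual one):

    -- Option 1: leave free space in each heap page so the repeated
    -- updates can stay on-page as HOT updates
    ALTER TABLE device_readings SET (fillfactor = 70);

    -- Option 2: one-to-one split, keeping the two frequently-updated
    -- columns in a narrow table keyed by the parent's primary key
    CREATE TABLE device_reading_status (
        reading_id   bigint PRIMARY KEY REFERENCES device_readings (id),
        last_value   numeric     NOT NULL,
        last_seen_at timestamptz NOT NULL
    );

Note that changing fillfactor on an existing table only affects newly allocated pages; existing rows are repacked only if the table is rewritten (e.g. VACUUM FULL or CLUSTER). You can also watch n_tup_hot_upd versus n_tup_upd in pg_stat_user_tables to see what fraction of updates are actually HOT under your workload.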
David J.