On 12/22/2014 03:53 PM, Robert DiFalco wrote:
> This may fall into the category of over-optimization, but I've become
> curious.
>
> I have a user table with about 14 columns that are all 1:1 data, so
> they can't be normalized further. When I insert a row, all columns
> need to be set. But when I update, I sometimes only update 1-2
> columns at a time. Does the number of columns impact update speed?
>
> For example:
>
>     UPDATE users SET email = ? WHERE id = ?;
>
> I can easily break this up into logical tables like user_profile,
> user_credential, user_contact_info, user_summary, etc., with each
> table having only 1-4 columns. But with multiple tables I would often
> be joining them to bring back a collection of columns.
>
> I know I'm overthinking this, but I'm curious what the performance
> trade-offs are for breaking a table up into smaller, logically
> grouped tables.
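
[The split described in the question might look like the sketch below. The table and column definitions are hypothetical, built only from the names mentioned above, to make the trade-off concrete: narrower rows per table, but a join on every read that spans groups.]

```sql
-- Hypothetical split into narrow 1:1 tables sharing the user id.
CREATE TABLE user_profile (
    id    bigint PRIMARY KEY,
    name  text
);
CREATE TABLE user_credential (
    id            bigint PRIMARY KEY REFERENCES user_profile (id),
    password_hash text
);
CREATE TABLE user_contact_info (
    id    bigint PRIMARY KEY REFERENCES user_profile (id),
    email text
);

-- An update now touches only one narrow table...
UPDATE user_contact_info SET email = ? WHERE id = ?;

-- ...but reading columns from several groups requires joins:
SELECT p.name, c.email
FROM user_profile p
JOIN user_contact_info c USING (id)
WHERE p.id = ?;
```
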
In PostgreSQL, an update writes a new version of the whole row, not
just the updated columns, so the number of columns you SET makes
little difference to the cost of the UPDATE itself.

I think you are overthinking it.
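
[A quick way to observe this is a row's ctid, the system column giving its physical location: because MVCC writes a new row version on every UPDATE, the ctid changes even when only one column is set. A minimal sketch, assuming a scratch database; the table and values are made up for illustration:]

```sql
-- Scratch table for the demonstration.
CREATE TABLE mvcc_demo (id serial PRIMARY KEY, email text, name text);
INSERT INTO mvcc_demo (email, name) VALUES ('a@example.com', 'A');

-- ctid = the row version's physical location (block, offset).
SELECT ctid FROM mvcc_demo WHERE id = 1;

-- Update a single column...
UPDATE mvcc_demo SET email = 'b@example.com' WHERE id = 1;

-- ...and the ctid has changed: a full new row version was written,
-- and the old version is left for VACUUM to reclaim.
SELECT ctid FROM mvcc_demo WHERE id = 1;
```
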
cheers
andrew
--
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance