On 16/09/2016 12:00, John R Pierce wrote:
On 9/16/2016 3:46 AM, Chris Withers wrote:
When you do updates, are you changing any of the indexed fields, or
just "value"?
Yeah, it's a temporal table, so "updates" involve modifying the period
column for a row to set its end ts, and then inserting a new row with
a start ts running on from that.
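Roughly this pattern, for concreteness (a sketch only: the table name
"prices", the exact column types, and the use of an open-ended tstzrange
for the current row are all assumptions on my part):

    BEGIN;

    -- Close off the currently-open row for this key.
    UPDATE prices
       SET period = tstzrange(lower(period), now())
     WHERE key1 = 'abc'
       AND key2 = 'xyz'
       AND upper_inf(period);

    -- Insert the new version, starting where the old one ended.
    INSERT INTO prices (key1, key2, period, value)
    VALUES ('abc', 'xyz', tstzrange(now(), NULL), 42);

    COMMIT;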
That's expensive, as it has to reindex that row, and range indexes are
more expensive than timestamp indexes.
Modifying the primary key is kind of a violation of one of the basic
rules of relational databases, as it means the row can't be referenced by
another table.
Right, but these rows have no natural primary key. Would it help if I
just added an auto-incrementing integer key? Would that make a positive
difference or would it just be a wasted column?
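If it did help, it would presumably just be something along these lines
(again using the hypothetical "prices" table from the sketch above):

    -- Surrogate key: only worth it if other tables need a stable,
    -- single-column reference to a specific row version.
    ALTER TABLE prices ADD COLUMN id bigserial UNIQUE;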
I expect the expensive one is the constraint that ensures no periods
overlap for the given key. I'm not sure how that can be done short of
a full scan for each update/insert.
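For reference, my understanding is that the usual way to express that
without a per-row scan is an exclusion constraint backed by a GiST index;
this is a sketch against the hypothetical schema above, not necessarily
how the table is actually defined:

    -- btree_gist supplies the = operator classes GiST needs for the
    -- plain key columns.
    CREATE EXTENSION IF NOT EXISTS btree_gist;

    ALTER TABLE prices
      ADD CONSTRAINT prices_no_overlap
      EXCLUDE USING gist (key1 WITH =, key2 WITH =, period WITH &&);

With the GiST index, each insert/update only probes for overlapping
ranges with the same key rather than scanning the whole table.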
Indeed, I wonder if making the constraint deferrable might help for the
bulk case?
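Roughly this, assuming the overlap check is an exclusion constraint along
the lines sketched above (constraint and table names are hypothetical):

    -- Recreate the constraint as deferrable.
    ALTER TABLE prices
      DROP CONSTRAINT prices_no_overlap,
      ADD CONSTRAINT prices_no_overlap
        EXCLUDE USING gist (key1 WITH =, key2 WITH =, period WITH &&)
        DEFERRABLE INITIALLY IMMEDIATE;

    -- Then, for a bulk load, defer the check until commit:
    BEGIN;
    SET CONSTRAINTS prices_no_overlap DEFERRED;
    -- ... bulk updates/inserts ...
    COMMIT;  -- overlap checks run here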
It might actually perform better if you write the index with the key
first, as presumably the key is invariant?
You mean:
PRIMARY KEY, btree (key1, key2, period)
as opposed to
PRIMARY KEY, btree (period, key)
Interesting; I'd assumed Postgres would optimise that under the covers...
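For what it's worth, a multicolumn btree is built in the declared column
order rather than being reordered automatically, so getting the key
columns first means redefining the index. A sketch, with the hypothetical
names from the earlier examples:

    -- Redefine the primary key with the invariant key columns leading.
    ALTER TABLE prices
      DROP CONSTRAINT prices_pkey,
      ADD CONSTRAINT prices_pkey PRIMARY KEY (key1, key2, period);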
Chris