On Wednesday, October 28, 2015 1:52 PM, Kevin Grittner <kgrittn@xxxxxxxxx> wrote:

> But if we already have a write lock on the tuple (through the xmax
> column), then an update or delete of the row by another transaction
> would cause a write conflict and one of the transactions will surely
> be rolled back. An SIReadLock thus adds no value, so we omit it.

Oh, I see that your other case also had a primary key on the column used
for selecting the row to update; you're probably wondering why the
optimization didn't kick in for that case. From the locks it appears that
in the first case a sequential scan of the table was used, which takes a
relation-level SIReadLock. To allow use of the primary key (and thus less
overhead in the SSI code), you may need to analyze the table or adjust
cost factors to encourage an index scan rather than a sequential scan.
Increasing cpu_tuple_cost and effective_cache_size, and decreasing
random_page_cost, might nudge things in that direction.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
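
For anyone following along, a minimal sketch of the tuning described
above. The table and column names (my_table, id) and all parameter values
are hypothetical, not recommendations; the session-level SET form is shown,
but these can also be set per-role or in postgresql.conf:

```sql
-- Refresh planner statistics so the primary-key index looks attractive.
ANALYZE my_table;

-- Illustrative cost-factor adjustments (tune for your own hardware):
SET random_page_cost = 1.1;        -- lower: random I/O assumed cheap (e.g. SSD)
SET cpu_tuple_cost = 0.03;         -- higher: raises the per-tuple cost of a seq scan
SET effective_cache_size = '8GB';  -- higher: index pages assumed likely to be cached

-- Verify the plan switched from Seq Scan to Index Scan before relying on it.
EXPLAIN SELECT * FROM my_table WHERE id = 42;
```

With an index scan, the SSI predicate lock can attach at the page or tuple
level instead of the whole relation, which reduces false-positive
serialization conflicts.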