> A SELECT ... FROM "othertable" ... FOR SHARE won't conflict with a
> concurrent update on "atable".
>
> Do I guess right that there was also an UPDATE on the row in
> "othertable"?

Yes, that was what I meant to convey. Sorry if I was not clear. The
point was that an INSERT into "atable" conflicted with an UPDATE on
"othertable", as a result of the foreign key constraint.

> This message will be displayed although the statement that causes
> the conflict is actually the UPDATE, perhaps because this was the
> first statement to acquire a lock on that row in this transaction.

I surmised from the SELECT that it had to be an internally generated
SELECT used to enforce referential integrity, because the only
statements made in the transaction in question were three INSERTs.

(So in this particular case the isolation level could simply be
dropped to a lower one, but I find this interesting in general because
I don't like generating conflicts that are not "real" conflicts in the
application domain. And this is an actual implicit conflict at the SQL
level, which is even more subtle than the more typical cases such as
value increments expressed as updates. As I said, in this case the
isolation level can be dropped, but in other cases it might trigger a
desire to drop the enforced referential integrity instead - which is
not good.)

[snip]

> Transaction 2 now issues an INSERT on "atable". This requires a
> RowShareLock on the index row of the index on "othertable" that
> is referenced by the foreign key constraint. But the corresponding
> index row has changed since the transaction began, hence the error.

Yes, this matches my "theory".

> Transaction 2 needs the index entry to verify that the foreign
> key constraint is satisfied. It cannot perform the check on the old
> version of the index row as this might introduce inconsistencies.
> Being serializable, it must not use the new index entry.

Yes.
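(For the archives, here is a minimal sketch of the scenario described
above. The table names "atable" and "othertable" are from this thread;
the column names and values are hypothetical, and the exact error text
may vary by PostgreSQL version.)

```sql
-- Hypothetical schema: "atable" has a foreign key into "othertable".
CREATE TABLE othertable (id integer PRIMARY KEY, val text);
CREATE TABLE atable (id integer PRIMARY KEY,
                     other_id integer REFERENCES othertable (id));
INSERT INTO othertable VALUES (1, 'x');

-- Session 1
BEGIN ISOLATION LEVEL SERIALIZABLE;
SELECT 1;  -- first query fixes the transaction's snapshot

-- Session 2 (concurrently)
BEGIN;
UPDATE othertable SET val = 'y' WHERE id = 1;
COMMIT;

-- Session 1 again: the internally generated FK check behind this
-- INSERT tries to share-lock the referenced row in "othertable",
-- whose current version postdates the snapshot:
INSERT INTO atable VALUES (1, 1);
-- ERROR:  could not serialize access due to concurrent update
```

Note that the UPDATE in session 2 never touches the referenced key
column, which is exactly the case where one could argue the INSERT
ought to be allowed to proceed.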
> One could argue that, as long as the old index entry and the new
> index entry have the same values, the transaction could safely proceed.

Yes. :) Or alternatively, the fact that it was never updated could be
tracked.

I guess you might argue that if one, for example, deleted the row and
re-created one with another id, this would in fact break referential
integrity. The same goes for updating the relevant column. But barring
implementation reasons, it seems clear that if the row was not dropped
and the relevant column was not touched, the ideal implementation
would allow the INSERT to complete even in a serializable transaction.

> I guess it is for ease of implementation, design or performance reasons
> that this is not done.

This is what I am wondering: whether it is done this way due to
expectation/standard, or as an implementation side effect. In the
latter case it is fixable.

> Your idea of "cell level locking" will probably not work with
> PostgreSQL: since any change in a data row will cause a new row to be
> created, there would be no advantage.

I didn't think of that. I can certainly see that
update-by-tuple-duplication makes it difficult to implement this case
"optimally".

Thanks,

-- 
/ Peter Schuller

PGP userID: 0xE9758B7D or 'Peter Schuller <peter.schuller@xxxxxxxxxxxx>'
Key retrieval: Send an E-Mail to getpgpkey@xxxxxxxxx
E-Mail: peter.schuller@xxxxxxxxxxxx Web: http://www.scode.org