On 2023-Nov-16, Nick DeCoursin wrote:

> In my opinion, it would be better for merge to offer the functionality to
> simply ignore the rows that cause unique violation exceptions instead of
> tanking the whole query.

"Ignore" may not be what you want, though. Perhaps the fact that the insert (coming from the NOT MATCHED clause) fails (== conflicts with a tuple concurrently inserted in a unique or exclusion constraint) should transform the row operation into a MATCHED case, so it'd fire the other clauses in the overall MERGE operation. Then you could add a WHEN MATCHED THEN DO NOTHING clause, which does the ignoring that you want; or just let such rows be handled by WHEN MATCHED THEN UPDATE or whatever. But you may need some way to distinguish rows that appeared concurrently from rows that were there all along.

As regards the SQL standard, I hope what you're describing is merely not documented by them. If it indeed isn't, it may be possible to get them to accept some new behavior, and then I'm sure we'd consider implementing it. If your suggestion goes against what they already have, I'm afraid you'd be doomed. So the next question is: how do other implementations handle the case you're talking about? SQL Server, DB2 and Oracle are the relevant ones.

Assuming the idea is good and there are no conflicts, then maybe it's just lack of round tuits. Happen to have some?

I vaguely recall thinking about this, and noticing that implementing something of this sort would require messing around with the ExecInsert interface. It'd probably require splitting it in pieces, similar to how ExecUpdate was split.

There are some comments in the code about possible "live-locks", where MERGE would be eternally confused between inserting a new row which it then wants to delete; or something like that. For sure we would need to understand the concurrent behavior of this new feature very clearly.
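As a sketch of what this could look like from the user's side, under the proposed (hypothetical) semantics a row whose INSERT conflicts with a concurrently inserted tuple would be re-routed to the MATCHED arm instead of raising a unique violation. The table and column names below are invented for illustration; the WHEN MATCHED THEN DO NOTHING syntax itself already exists in MERGE, it's the re-routing behavior that does not:

```sql
-- Hypothetical behavior: a row whose INSERT (from the NOT MATCHED arm)
-- conflicts with a concurrently inserted tuple on customers_pkey would be
-- re-evaluated as MATCHED rather than aborting the whole statement.
MERGE INTO customers AS c
USING new_customers AS n ON c.id = n.id
WHEN MATCHED THEN
    DO NOTHING                        -- ignore rows that turned out to exist
WHEN NOT MATCHED THEN
    INSERT (id, name) VALUES (n.id, n.name);
```

Today, the same concurrent conflict makes the whole MERGE fail with a unique-violation error, which is the behavior being complained about.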
An interesting point is that our inserts *wait* to see whether the concurrent insertion commits or aborts when a unique constraint is involved. I'm not sure you want to have MERGE blocking on concurrent inserts.

This is all assuming READ COMMITTED semantics; on REPEATABLE READ or higher, I think you're just screwed, because of course MERGE is not going to get a snapshot that sees the rows inserted by transactions that started afterwards.

You'd need to explore all this very carefully.

-- 
Álvaro Herrera         Breisgau, Deutschland — https://www.EnterpriseDB.com/