Siddharth Jain <siddhsql@xxxxxxxxx> writes:
> When using SSI <https://wiki.postgresql.org/wiki/SSI>, let's say we have two
> transactions T1 and T2 and there is a serialization conflict. Postgres
> knows when one or the other transaction is doomed to fail
> [image: image.png]

Please don't use images for things that could perfectly well be expressed
as text.  They're not quotable, they may not show up in the archives (as
this one doesn't), etc etc.  Email is a text medium, despite Google's
attempts to convince you otherwise.

> but will not raise serialization error until the transaction commits. This
> can cause a massive perf hit because the transactions could be long
> running. Why not raise the error early on when the conflict is detected to
> avoid wasting CPU and other resources? Can anyone explain this to me?

Try src/backend/storage/lmgr/README-SSI, notably this bit:

    * This SSI implementation makes an effort to choose the
    transaction to be canceled such that an immediate retry of the
    transaction will not fail due to conflicts with exactly the same
    transactions.  Pursuant to this goal, no transaction is canceled
    until one of the other transactions in the set of conflicts which
    could generate an anomaly has successfully committed.  This is
    conceptually similar to how write conflicts are handled.

The main point here is that "at least one of these transactions will
have to fail" is very different from "all of these transactions have to
fail".  If the implementation prematurely forecloses on one of them, it
may be that *no* useful work gets done, because the others also fail
later on for other reasons; moreover, it might be that the victim
transaction could have committed after those others failed.  Withholding
judgment about which one to cancel until something has committed ensures
that more than zero work gets completed.
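To make the quoted README-SSI policy concrete, here is a toy Python sketch of the bookkeeping it describes. This is emphatically *not* PostgreSQL's implementation; every class and method name here is invented for illustration. The point it models is just the rule above: members of a conflict set are only flagged, and no victim is chosen until some other member of the set has successfully committed, so that an immediate retry of the victim cannot conflict with exactly the same still-live transactions.

```python
class Txn:
    """Toy transaction: tracks doomed/committed flags only."""
    def __init__(self, name):
        self.name = name
        self.doomed = False      # marked for cancellation, not yet raised
        self.committed = False

class ConflictSet:
    """Transactions whose combined history could produce an anomaly."""
    def __init__(self, *txns):
        self.txns = set(txns)

    def on_commit(self, txn):
        # The first successful commit in the set locks in the anomaly:
        # the victim must now come from the remaining members.
        txn.committed = True
        survivors = [t for t in self.txns if not t.committed]
        if survivors:
            # Only now pick a victim (tie-break by name in this toy),
            # so a retry of it won't face the same conflict set.
            victim = min(survivors, key=lambda t: t.name)
            victim.doomed = True
            return victim
        return None

t1, t2 = Txn("T1"), Txn("T2")
cs = ConflictSet(t1, t2)

# Before anything commits, nobody is canceled, even though it is
# already known that at least one of them cannot be allowed to succeed.
assert not t1.doomed and not t2.doomed

# Once T2 commits, T1 becomes the doomed victim; in real PostgreSQL it
# would then get ERROR 40001 (serialization_failure) at the next check.
victim = cs.on_commit(t2)
assert victim is t1 and t1.doomed
```

In the real server the "doomed" flag plays the same role: the transaction is not interrupted asynchronously, but the flag is checked at suitable points so the error surfaces promptly, as noted below.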
Also note that AFAICS we do notice fairly promptly once a transaction
has been marked as doomed; it's not the case that we wait till the
transaction's own commit to check that.

			regards, tom lane