On Wed, Oct 26, 2016 at 3:42 AM, Frank Millman <frank@xxxxxxxxxxxx> wrote:

> I am designing an inventory application, and I want to ensure
> that the stock level of any item cannot go negative.

One way to do this is to use only transactions at the SERIALIZABLE isolation level to maintain and query this data. When you do that, you can write each transaction as though it will only ever be run by itself; if a concurrent transaction would cause incorrect behavior, you get an error with a SQLSTATE starting with "40", and you can retry the failed transaction from the start. For applications like the one you describe, this often performs better than approaches that use blocking locks (assuming proper configuration and reasonable indexes).

https://www.postgresql.org/docs/current/static/transaction-iso.html

Logically, the problem is similar to the overdraft protection example here:

https://wiki.postgresql.org/wiki/SSI#Overdraft_Protection

Basically, you need some way to catch serialization failure errors and retry the failed transaction from the start. That frees you from having to find every place a race condition could exist and cover each one individually.

If you want to use a less strict isolation level, you need either to promote the conflict from read-write to write-write by using SELECT FOR UPDATE, or to materialize the conflict. The latter can be done by maintaining a total within any transaction that modifies the detail rows (from triggers or application code), which causes a write conflict when two transactions try to update the same total at the same time, or by using explicit locking controlled from the application.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

--
Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
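The "catch serialization failures and retry the whole transaction" pattern above can be sketched in application code. This is a minimal illustration only: `SerializationFailure`, `with_retry`, and `flaky_txn` are hypothetical names invented here, and the exception is a stand-in for a real driver error whose SQLSTATE begins with "40" (with psycopg2, for example, you would inspect the exception's `pgcode` attribute instead).

```python
# Hypothetical sketch of the retry-on-serialization-failure pattern.
# In real code, the exception would come from the database driver and
# carry a SQLSTATE starting with "40"; here it is simulated.

class SerializationFailure(Exception):
    """Stand-in for a database error whose SQLSTATE begins with '40'."""
    sqlstate = "40001"

def with_retry(txn, max_attempts=10):
    """Run `txn` (one complete SERIALIZABLE transaction) and, on a
    serialization failure, retry it from the start."""
    for _ in range(max_attempts):
        try:
            return txn()
        except SerializationFailure:
            continue  # transaction rolled back; run it again from the top
    raise RuntimeError("transaction failed %d times" % max_attempts)

# Demonstration: a transaction that "fails" twice, then commits.
calls = []
def flaky_txn():
    calls.append(1)
    if len(calls) < 3:
        raise SerializationFailure()
    return "committed"

result = with_retry(flaky_txn)
```

The key point is that the retry wraps the *entire* transaction, so the application never has to reason about which individual statements might race.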