Tom Lane wrote:

> seiliki@xxxxxxxxxxxxx writes:
>> I am trying to implement a mechanism that prohibits the last row
>> of a data set from being deleted.
>
> The reason that doesn't work is you marked it "stable", so it
> always sees the starting state of the outer query.
>
> Mind you, even with that oversight fixed, this approach will do
> little to save you from concurrent-update situations.  That is,
> transaction A could delete some of the rows with c1=1, and
> transaction B could concurrently delete the rest, and neither
> transaction will see a reason why it shouldn't commit.

Right, that is a form of write skew: each transaction writes to the
database (in this case with deletes) based on reading a portion of the
database which the other transaction is writing to.

This is handled automatically if all transactions use transaction
isolation level SERIALIZABLE.  Otherwise you need to materialize the
conflict (for example, by adding a separate table with one row per c1
value and a count of matching t1 rows, maintained by triggers) or
promote the conflict (turning the non-blocking read-write conflicts
into write-write conflicts by updating all the rows with the same c1
value which you are not deleting).  Or you could use table-level
blocking with LOCK TABLE statements, or develop some scheme using
advisory locks.  Rough sketches of these approaches follow below.

These pages might help:

http://www.postgresql.org/docs/9.2/interactive/mvcc.html
http://wiki.postgresql.org/wiki/SSI

-Kevin
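
For what it's worth, the serializable approach needs no schema changes
at all.  A minimal sketch from the client side, using the t1/c1 names
from the original question; the "c2 = 42" condition is just an invented
stand-in for whatever picks out the rows being deleted:

    BEGIN ISOLATION LEVEL SERIALIZABLE;

    -- The "would at least one row remain?" check, however it is
    -- implemented (trigger or application code); under SERIALIZABLE
    -- this read is tracked by the server.
    SELECT count(*) FROM t1 WHERE c1 = 1;

    -- Delete part of the group (c2 = 42 is illustrative only).
    DELETE FROM t1 WHERE c1 = 1 AND c2 = 42;

    COMMIT;

If two sessions run overlapping transactions like this against the same
c1 group, one of them will be rolled back with SQLSTATE 40001
(serialization_failure), so the application has to be prepared to retry
the failed transaction.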
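
To put a little more flesh on the "materialize the conflict" option,
something along these lines might work.  The summary table, function,
and trigger names are invented, c1 is assumed to be an integer, and
handling of brand-new c1 values is left out of the sketch:

    -- One row per c1 value, holding the count of matching t1 rows.
    CREATE TABLE t1_counts (
        c1  int    PRIMARY KEY,
        cnt bigint NOT NULL CHECK (cnt >= 1)
    );

    -- Populate it once from the existing data.
    INSERT INTO t1_counts SELECT c1, count(*) FROM t1 GROUP BY c1;

    CREATE FUNCTION t1_maintain_counts() RETURNS trigger AS $$
    BEGIN
        IF TG_OP = 'DELETE' THEN
            -- The UPDATE row-locks the summary row, so concurrent
            -- deletes against the same c1 group block each other, and
            -- the CHECK constraint rejects whichever transaction would
            -- remove the last row.
            UPDATE t1_counts SET cnt = cnt - 1 WHERE c1 = OLD.c1;
            RETURN OLD;
        ELSE  -- INSERT; assumes a t1_counts row exists for NEW.c1
            UPDATE t1_counts SET cnt = cnt + 1 WHERE c1 = NEW.c1;
            RETURN NEW;
        END IF;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER t1_maintain_counts_trg
        BEFORE INSERT OR DELETE ON t1
        FOR EACH ROW EXECUTE PROCEDURE t1_maintain_counts();

With this in place the transactions can stay at READ COMMITTED; a
concurrent delete which would take a group's count below one fails on
the CHECK constraint instead of committing.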
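
Promoting the conflict needs no extra table, just a dummy update of the
surviving rows in the same transaction as the delete (again, c2 = 42 is
only a stand-in condition).  It relies on running at REPEATABLE READ or
stricter, so that the second transaction to touch the group gets a
serialization failure rather than quietly proceeding:

    BEGIN ISOLATION LEVEL REPEATABLE READ;

    -- No-op update of the rows being kept; it writes new row versions,
    -- so a concurrent transaction deleting those rows now conflicts
    -- directly with this one.
    UPDATE t1 SET c1 = c1 WHERE c1 = 1 AND NOT (c2 = 42);

    DELETE FROM t1 WHERE c1 = 1 AND c2 = 42;

    COMMIT;

The advisory lock idea is similar in spirit, but serializes whole
check-and-delete sequences explicitly, keyed on the c1 value, and works
even at READ COMMITTED:

    BEGIN;
    SELECT pg_advisory_xact_lock(1);        -- key = the c1 value in play
    SELECT count(*) FROM t1 WHERE c1 = 1;   -- the "last row?" check
    DELETE FROM t1 WHERE c1 = 1 AND c2 = 42;
    COMMIT;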