On 2 September 2012 22:42, johnlumby <johnlumby@xxxxxxxxxxx> wrote:
> On 09/01/12 03:46, Dean Rasheed wrote:
>> What you are trying to do cannot be achieved with rules, and doing it
>> this way with triggers is likely to be messy. I think you need to
>> consider a different approach.
>>
>> It sounds like what you really want is finer-grained control over the
>> Hibernate optimistic locking check. One way of doing that would be to
>> do the check yourself in a BEFORE UPDATE ROW trigger, with something
>> to the effect of:
>>
>>   if new.version != old.version+1:
>>       raise concurrency error (will cause the entire transaction to
>>       be rolled back)
>
> Thanks Dean.  A nice suggestion, but my reading of the rules for a
> BEFORE row-level trigger is that it cannot see the NEW tuple:
>
>   "The data change (insertion, update, or deletion) causing the
>   trigger to fire is naturally not visible to SQL commands executed
>   in a row-level BEFORE trigger, because it hasn't happened yet."
>

What that passage is saying is that if you run a SELECT statement inside
the BEFORE trigger function, you won't see the new values, because the
table hasn't been updated yet. However, a BEFORE UPDATE trigger has
access to variables called OLD and NEW, which are designed specifically
for this purpose (you don't need to do a SELECT in the trigger). OLD is
the value currently in the table (before the update) and NEW is the
value about to be written to the table (modulo the caveat below).

So you can implement optimistic locking as follows:

1). SELECT the original data from the table, including the original
    value of version.
2). Work out the new values to set.
3). UPDATE the table with the new values, and set
    version = original_version + 1.

Then, in the BEFORE UPDATE trigger, NEW.version will be equal to
original_version + 1. So if you compare NEW.version with OLD.version + 1,
you are really comparing OLD.version with original_version, i.e. testing
that the value in the table immediately before the update is the same as
it was in step (1).
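To make that concrete, here is a minimal sketch of such a trigger. The
table name "accounts" and the "id" column are hypothetical (they are not
from the original thread); all it assumes is an integer "version" column
that the application increments by exactly 1 on every UPDATE, as in
steps (1)-(3) above:

```sql
-- Hypothetical trigger function implementing the optimistic-locking
-- check described above (table/column names are illustrative only).
CREATE OR REPLACE FUNCTION check_version() RETURNS trigger AS $$
BEGIN
    -- NEW.version is the value the UPDATE is about to write;
    -- OLD.version is the value currently in the row, which is
    -- locked by the time this trigger fires.
    IF NEW.version IS DISTINCT FROM OLD.version + 1 THEN
        -- Raising an exception aborts the UPDATE and rolls back
        -- the whole transaction.
        RAISE EXCEPTION 'concurrent update detected on id %: row version is %, expected %',
            OLD.id, OLD.version, NEW.version - 1;
    END IF;
    RETURN NEW;  -- allow the update to proceed
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER accounts_version_check
    BEFORE UPDATE ON accounts
    FOR EACH ROW EXECUTE PROCEDURE check_version();
```

With that in place, an application-side statement such as

```sql
UPDATE accounts SET balance = 100, version = 4 WHERE id = 1;
```

succeeds only if the row's version was 3 at the moment of the update;
otherwise the trigger raises an exception and the transaction is rolled
back.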
Thus it traps the case where another process has modified the row under
your feet. By that point, postgresql has a lock on the row about to be
modified, so you are guarded against race conditions.

> and also under notes to the CREATE TRIGGER statement:
>
>   "In a BEFORE trigger, [....]
>   Note in particular that the NEW row seen by the condition
>   is the current value, as possibly modified by earlier triggers."
>

That's just referring to the fact that you may have defined multiple
different BEFORE triggers. Each has access to the OLD and NEW variables,
and each is allowed to modify NEW in order to change what ends up in the
table, so the final result would depend on the order in which the
triggers are fired. In practice it is probably best to avoid this by not
defining multiple BEFORE triggers for the same operation on the same
table.

>> Then you could turn off the Hibernate check and add any finer-grained
>> control you needed in your trigger function.
>
> Even if your suggestion could somehow work, we really prefer to adopt
> the approach of selectively overriding the hibernate optimistic
> locking only where needed, rather than throwing it out completely and
> doing that function all in our own code.
>

I can understand that. However...

> So we are really hoping that there is some functionality somewhere in
> postgresql that allows us to intercept a failing UPDATE. With all of
> the capabilities of triggers and rules, it seemed to me that this must
> be possible.
>

I don't think that it is possible in postgresql. It might be possible on
the Hibernate side; I've never looked to see if that code can be
overridden.

Regards,
Dean

-- 
Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general