Re: How to make transaction delete see data from a just completed concurrent transaction?

George Woodring <george.woodring@xxxxxxxxxx> writes:
> We have been using the model of updating certain data in a table of
> begin;
> delete from foo where bar='myfoo';
> insert into foo all of the correct data for myfoo;
> commit;

> Our thinking was that if you had two running at close to the same time, the
> first transaction would finish and then the second one would run making the
> table consistent.  However, this is not the case.  The second transaction is
> not deleting anything and we are getting double the inserted data.  Our
> guess is that the second transaction is trying to delete the data as it
> stood at the start of the transaction, not as it stands after the first
> transaction releases its lock, and that data is already gone from the
> first transaction.

I think a closer statement is that the DELETE doesn't see any
uncommitted rows that might be in the process of being inserted by
another transaction.
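
A sketch of the interleaving under the default READ COMMITTED
isolation (the session labels are illustrative, not from the original
report):

```sql
-- Session A                           -- Session B
BEGIN;
DELETE FROM foo WHERE bar = 'myfoo';
INSERT INTO foo ...;                   BEGIN;
                                       DELETE FROM foo WHERE bar = 'myfoo';
                                       -- blocks: A holds locks on the
                                       -- rows it deleted
COMMIT;
                                       -- B resumes: the rows it was
                                       -- waiting on are now dead, and A's
                                       -- replacement rows are not in B's
                                       -- snapshot, so "DELETE 0"
                                       INSERT INTO foo ...;
                                       COMMIT;
-- Result: the table holds two copies of the 'myfoo' data.
```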

I think you could handle this by using serializable transactions,
which will fail if any concurrent transaction modifies data they
depend on.  You'd then need a retry loop for serialization
failures.
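
Sketched in SQL (the retry itself has to live in the client; SQLSTATE
40001 is serialization_failure):

```sql
-- One attempt; on SQLSTATE 40001 the client rolls back and re-runs
-- the whole block from the top.
BEGIN ISOLATION LEVEL SERIALIZABLE;
DELETE FROM foo WHERE bar = 'myfoo';
INSERT INTO foo ...;   -- all of the correct data for myfoo
COMMIT;
```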

Another approach is to try to design the table such that a uniqueness or
exclusion constraint disallows having multiple sets of data for the same
key.  You'd still get failures and have to be willing to retry, but the
cause of the failure would be much more narrowly defined than it is
with a serialization failure.
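
For instance, if each row of a set carried a per-set sequence number
(an assumed column, not from the original schema), a two-column unique
constraint would make a doubled load fail outright:

```sql
ALTER TABLE foo
  ADD CONSTRAINT foo_bar_seq_key UNIQUE (bar, seq);

-- The second concurrent rewrite's first INSERT now collides with a row
-- the first transaction already committed and fails with SQLSTATE 23505
-- (unique_violation), which the client can catch and retry.
```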

			regards, tom lane




