A simpler example:
In the context of one transaction I run many queries of the form

    INSERT INTO my_table (value)
    SELECT 'v' WHERE 'v' NOT IN (SELECT value FROM my_table);

i.e. insert a value only if it is not already there. If I have 2 processes
running hundreds of these at the same time, I end up with duplicates,
even with the isolation level set to SERIALIZABLE.
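A minimal sketch of the interleaving that produces the duplicates (table and
value names are placeholders, not the real schema). In the PostgreSQL
releases of this era, SERIALIZABLE is implemented as snapshot isolation, so
each session's NOT IN check only sees its own snapshot; the steps below are
a timeline across two sessions, not a single script:

    -- session A:
    BEGIN;
    -- session B:
    BEGIN;
    -- session A: the check sees no 'v', so the row is inserted
    INSERT INTO my_table (value)
    SELECT 'v' WHERE 'v' NOT IN (SELECT value FROM my_table);
    -- session B: its snapshot does not see A's uncommitted row either,
    -- so the check also passes and a second 'v' is inserted
    INSERT INTO my_table (value)
    SELECT 'v' WHERE 'v' NOT IN (SELECT value FROM my_table);
    -- session A:
    COMMIT;
    -- session B:
    COMMIT;
    -- result: my_table now contains 'v' twice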
Any ideas?
thnx
Panagiotis
Thomas Kellerer wrote:
Panagiotis Pediaditis, 14.09.2007 16:45:
Well, the problem is that I am working on an RDF query engine for
persistent RDF data. The data is stored/structured in a specific way in
the database. When I perform updates in parallel, I end up with
inconsistencies because there are cross-table dependencies. For example,
one transaction reads to see whether a resource exists so that it can add
a property with that resource as its subject. Then another transaction
deletes the resource after the first has decided that the resource is
there, but before it has added the property.
Thus it would be helpful for me to avoid the difficult task of
dependency-based locking and just lock the whole database.
Any ideas?
Hmm. To me this sounds like all those steps should in fact be _one_
transaction and not several transactions.
Thomas
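A rough sketch of that suggestion, combined with the explicit locking asked
about above (the table and column names here are placeholders, not the
engine's actual schema). PostgreSQL has no command to lock an entire
database, but LOCK TABLE on the tables involved, issued inside the same
transaction as the check-then-act sequence, keeps concurrent writers out
until COMMIT:

    BEGIN;
    -- SHARE ROW EXCLUSIVE blocks concurrent INSERT/UPDATE/DELETE and
    -- other sessions taking the same lock, so the steps below cannot
    -- interleave with another session's updates to these tables.
    LOCK TABLE resources, properties IN SHARE ROW EXCLUSIVE MODE;

    -- Add the property only if the resource still exists; both the check
    -- and the insert happen under the lock, in one transaction.
    INSERT INTO properties (subject, predicate, object)
    SELECT uri, 'http://example.org/hasLabel', 'some value'
    FROM resources
    WHERE uri = 'http://example.org/r1';

    COMMIT;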