I have a Django + PostgreSQL application.
For data integrity, PostgreSQL is set up with serializable transactions, so I'm retrying each transaction (in a generic class) like this:
# utils here is django.db.utils, which defines OperationalError
from django.db import utils

def _execute_wrapper(pair):
    obj, item = pair
    from time import sleep
    last_error = None
    for tt in xrange(obj._max_retry):
        try:
            obj.execute(item)
            return (None, tt)  # success: no error, attempt index
        except utils.OperationalError as exc:
            # Presumably restartable: remember it, let the object
            # handle/log it, back off briefly and retry.
            last_error = exc
            obj.handle_error(exc)
            sleep(obj._retry_wait)
        except Exception as exc:
            # Anything else is not retried.
            obj.handle_error(exc)
            raise
    return (last_error, tt)  # all retries exhausted
Default _max_retry is 5 and _retry_wait is 0.1 s.
It will try up to 5 times to execute each instruction (in case of an OperationalError), and after the last attempt it gives up with the last error it received, aborting.
Now my problem is that aborting on the last try (of a restartable error, OperationalError with SQLSTATE 40001) is not an option... It simply needs to get through, locking out whatever other processes and queries it needs to.
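For reference, the restartable case can presumably be told apart like this, assuming psycopg2 is the driver (Django attaches the original driver exception as __cause__, and psycopg2 exposes the SQLSTATE as pgcode); _is_restartable is just an illustrative name:

SERIALIZATION_FAILURE = '40001'  # SQLSTATE for serialization_failure

def _is_restartable(exc):
    # The original psycopg2 exception (carrying pgcode) is attached by
    # Django as __cause__ on the wrapping OperationalError.
    cause = getattr(exc, '__cause__', None)
    return getattr(cause, 'pgcode', None) == SERIALIZATION_FAILURE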
So I've thought about using a generic lock (at the Django level, not in PostgreSQL) that every transaction would check, but that becomes a problem once the Django application runs in multiple instances (behind a load balancer)...
So I've thought of pg_advisory_lock() to take the lock at the database level, but then I'd need some way to make sure Django checks that lock on every database request (a select, an update, a function call, etc.).
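Something like this helper is what I have in mind for that check; it's only a sketch, and the name wait_for_global_lock and the key GLOBAL_WRITE_LOCK are made up for illustration:

from django.db import connection

GLOBAL_WRITE_LOCK = 4711  # arbitrary advisory-lock key, made up for this example

def wait_for_global_lock():
    # Blocks while another session holds the advisory lock, then
    # releases it immediately so normal traffic keeps flowing.
    cursor = connection.cursor()
    try:
        cursor.execute("SELECT pg_advisory_lock(%s)", [GLOBAL_WRITE_LOCK])
        cursor.execute("SELECT pg_advisory_unlock(%s)", [GLOBAL_WRITE_LOCK])
    finally:
        cursor.close()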
That check would need to take the lock and release it immediately (not the transaction-level variant of the lock, which is only released at the end of the transaction). Then the retry wrapper, on the 5th try, would take the same lock and not release it right away.
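On the last attempt the wrapper could then call something like this instead of obj.execute(item) directly; again only a sketch, reusing the hypothetical GLOBAL_WRITE_LOCK key from above:

from django.db import connection

def _execute_holding_lock(obj, item):
    # Final attempt: hold the advisory lock for the whole call, so any
    # session calling wait_for_global_lock() queues up behind us.
    cursor = connection.cursor()
    try:
        cursor.execute("SELECT pg_advisory_lock(%s)", [GLOBAL_WRITE_LOCK])
        obj.execute(item)
    finally:
        cursor.execute("SELECT pg_advisory_unlock(%s)", [GLOBAL_WRITE_LOCK])
        cursor.close()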
I'm open to any option; it's just that a transaction on its 5th try after an OperationalError really has to go through, even if it has to fully lock down the database against every other transaction...
Thanks,
Filipe