Re: database-level lockdown

On 07/07/2015 06:44 AM, Filipe Pina wrote:
On Tue, Jul 7, 2015 at 2:31, Alban Hertroys <haramrae@xxxxxxxxx> wrote:
On 7 July 2015 at 12:55, Filipe Pina <filipe.pina@xxxxxxxxxxxxx> wrote:

    On Tue, Jul 7, 2015 at 1:41, Adrian Klaver
    <adrian.klaver@xxxxxxxxxxx> wrote:

        Still not sure what it is you are trying to accomplish. Is it
        really necessary that every transaction be serialized? Or to
        put it another way, why are you running in serializable by
        default? Or yet another way, what is the problem you are
        trying to solve with serialized transactions?

    Exactly, that's the change I decided on after some tests yesterday
    (the "lock all tables on last try" approach degraded performance
    obscenely): giving up on serializable by default. I wanted to use
    serializable so developers wouldn't have to worry about using
    locks properly, but implementing this "serialization failure"
    handler is becoming a major headache...

What Adrian was trying to get out of you is why you think you need
those locks. You're working with an RDBMS; it already does the job of
keeping data integrity in a multi-user environment, and you can trust
it to do that well. So unless you're doing something really special,
you shouldn't need to lock anything. Among the special cases where you
do need locks are things like a gapless sequence (invoice numbers come
to mind). So once again, what do you need those locks for?

P.S. Please don't top-post on this list. I had to edit your message to
fix the flow of the conversation.
--
If you can't see the forest for the trees, cut the trees and you'll
see there is no forest.

It probably wasn't clear, but the conversation did start with me
laying out my issue; I'll try to rephrase.

Serializable keeps the data integrity of the database, that's true.
But it does so by "refusing" transactions that would compromise it
(aborting them with error 40001).

I need to make sure no transactions fail due to such errors, so I made
a handler that retries transactions if they fail with 40001.
But I can't leave it retrying forever, so I have a (configurable)
limit of 5 retries.
Five retries might be enough in some cases, but not in others.
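
To make that concrete, here is a minimal sketch of such a handler
using psycopg2 (the function name and calling convention are
illustrative, not my actual code; psycopg2 >= 2.8 exposes the SQLSTATE
exception classes used here):

    from psycopg2 import errors

    MAX_RETRIES = 5  # the configurable limit mentioned above

    def run_with_retry(conn, fn):
        """Run fn(cursor) in a SERIALIZABLE transaction, retrying on 40001."""
        for attempt in range(MAX_RETRIES):
            try:
                with conn:  # commits on success, rolls back on exception
                    with conn.cursor() as cur:
                        cur.execute(
                            "SET TRANSACTION ISOLATION LEVEL SERIALIZABLE")
                        return fn(cur)
            except errors.SerializationFailure:  # SQLSTATE 40001
                if attempt == MAX_RETRIES - 1:
                    raise  # the last try also failed: give up
                # otherwise loop and rerun the whole transaction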

So how do I make sure the last try will not fail with a serialization
error? The only idea I could come up with was a database-wide LOCK on
that last try, but as that is not available in PG, I went for "lock
all tables".
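
For reference, "lock all tables" amounted to roughly this sketch
(using psycopg2's sql module; covering only a single schema is a
simplification):

    from psycopg2 import sql

    def lock_all_tables(cur, schema="public"):
        # Approximate a database-wide lock by taking the strongest
        # table lock on everything in the schema -- this is the step
        # that degraded performance so badly.
        cur.execute(
            "SELECT tablename FROM pg_tables WHERE schemaname = %s",
            (schema,))
        for (table,) in cur.fetchall():
            cur.execute(sql.SQL("LOCK TABLE {} IN ACCESS EXCLUSIVE MODE")
                        .format(sql.Identifier(schema, table)))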

Does it make sense now?
I was probably taking the wrong approach to the problem anyway; I've
gone with "read committed" mode now plus explicit locking (SELECT ...
FOR UPDATE in the required spots) and it works better.
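
In practice it looks something like this sketch (the accounts table
and the withdraw logic are made-up examples):

    def withdraw(conn, account_id, amount):
        with conn:  # one READ COMMITTED transaction
            with conn.cursor() as cur:
                # Lock only the row we are about to change; concurrent
                # writers on the same account wait here instead of
                # failing with 40001.
                cur.execute(
                    "SELECT balance FROM accounts WHERE id = %s FOR UPDATE",
                    (account_id,))
                (balance,) = cur.fetchone()
                cur.execute(
                    "UPDATE accounts SET balance = %s WHERE id = %s",
                    (balance - amount, account_id))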

You also mentioned that this is for a Django application; have you
looked at its new (since 1.6) transaction management?

https://docs.djangoproject.com/en/1.8/topics/db/transactions/

It allows you fairly fine-grained control over transactions, including
access to savepoints.
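
For example, nested atomic() blocks become savepoints, so a failure
can be confined to part of a transaction. A sketch (transfer,
withdraw, deposit and handle_failed_deposit are made-up names):

    from django.db import IntegrityError, transaction

    def transfer(source, target, amount):
        with transaction.atomic():  # outer block: one transaction
            source.withdraw(amount)
            try:
                with transaction.atomic():  # inner block: a savepoint
                    target.deposit(amount)
            except IntegrityError:
                # Only the savepoint is rolled back; the outer
                # transaction, including the withdrawal, can still
                # commit.
                handle_failed_deposit(source, target, amount)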


I just come from a GTM background (as do all the other developers on
the project), and we're used to the TP RESTART concept where, on the
final retry, GTM locks the database region(s) that were used by that
transaction (not the whole DB, just those regions).

Thanks, and sorry for the top-posting; I'm not really used to mailing
lists and proper posting styles.


--
Adrian Klaver
adrian.klaver@xxxxxxxxxxx




