Re: deadlock debug methodology question

In response to "antiochus antiochus" <antiochus.usa@xxxxxxxxx>:

> On Thu, May 22, 2008 at 4:20 PM, Bill Moran <wmoran@xxxxxxxxxxxxxxxxxxxxxxx>
> wrote:
> >
> > In response to "antiochus antiochus" <antiochus.usa@xxxxxxxxx>:
> > >
> > > On Thu, May 22, 2008 at 2:57 PM, Bill Moran <wmoran@xxxxxxxxxxxxxxxxxxxxxxx>
> > > wrote:
> > >
> > > > In response to "antiochus antiochus" <antiochus.usa@xxxxxxxxx>:
> > > > >
> > > > > I have a deadlock situation, two transactions waiting on each other to
> > > > > complete. Based on the details below, would anyone have recommendations
> > > > > for me, please?
> > > >
> > > > I have a theory on deadlocks, and that theory is that it's damn near
> > > > impossible to track them all down, so your best bet is to wrap all
> > > > SQL calls in a function that detects deadlock and sleep/retries.

[snip]
 
> For the past few days, I've been making the application work using your
> second suggestion (what I meant by "some form of explicit locking"). It does
> work.
> 
> However, it is the first explicit lock used in this growing application and
> I was trying not to have any. The rows of this table are, from a business
> logic perspective, partitioned into distinct sets (think one set per
> customer) and a transaction would never involve rows across these sets. So
> in the original design, concurrency is great across customer requests.
> By using the table-level lock, writing transactions targeting these
> non-overlapping per-customer sets end up waiting on each other...
> 
> If that is the best we can do, so be it. However, I do not understand why
> acquiring row-level locks in a consistent order does not seem to be enough,
> or, if it should be, why my methodology for enforcing that ordering is flawed.
> Note that I also use foreign keys, and I am aware that constraints are
> verified (and locks acquired) after row insertion; this is taken into
> account as well. I could use a "per-customer" lock to improve concurrency
> across customers while avoiding deadlocks, or use serializable transactions,
> but I was wondering if a methodology to accomplish what I was originally
> trying to do has been documented anywhere.
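
On the "per-customer" lock idea: a transaction-scoped advisory lock keyed on
the customer id gives you that without a table-level lock, so writers for
different customers never block each other.  A rough sketch (Python with
psycopg2 here; the function and parameter names are invented for illustration):

    def with_customer_lock(conn, customer_id, work):
        """Serialize writers for one customer; other customers run in parallel."""
        with conn.cursor() as cur:
            # Advisory lock scoped to the transaction: released automatically
            # at commit or rollback, so there is nothing to clean up.
            cur.execute("SELECT pg_advisory_xact_lock(%s)", (customer_id,))
            work(cur)   # the per-customer reads/writes go here
        conn.commit()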

I seem to remember us determining that you were _not_ acquiring row-level
locks in a consistent order.  Did I miss something?  AFAIK, if you can do
so in a consistent order, you will never hit deadlocks.
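
Concretely, "consistent order" means something like the sketch below: take
every row lock you will need up front, ordered by a stable key, before doing
any of the updates.  (Python/psycopg2 again; the table and column names are
invented, so adapt to your schema.)

    def update_order_batch(conn, order_ids, new_status):
        with conn.cursor() as cur:
            # Take all row locks first, in ascending primary-key order, so two
            # transactions touching overlapping sets always queue the same way.
            # If inserts fire FK checks, lock the referenced parent rows (also
            # in a fixed order) first, for the same reason.
            cur.execute(
                "SELECT id FROM orders WHERE id = ANY(%s) ORDER BY id FOR UPDATE",
                (sorted(order_ids),),
            )
            cur.execute(
                "UPDATE orders SET status = %s WHERE id = ANY(%s)",
                (new_status, list(order_ids)),
            )
        conn.commit()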

Also, I stand by my original statement that deadlocks are nearly impossible
to avoid 100% of the time, and they're easy to recover from, so it makes sense
to program your application to detect them and retry appropriately.
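
Something along these lines is what I mean by detect-and-retry (a sketch with
psycopg2; adjust the error handling to whatever driver you actually use):

    import time
    import psycopg2
    from psycopg2 import errorcodes

    def run_with_deadlock_retry(conn, work, max_attempts=5):
        """Run work(cursor) in a transaction, retrying if Postgres reports a deadlock."""
        for attempt in range(1, max_attempts + 1):
            try:
                with conn.cursor() as cur:
                    work(cur)
                conn.commit()
                return
            except psycopg2.OperationalError as e:
                conn.rollback()
                # 40P01 = deadlock_detected; anything else is a real error.
                if e.pgcode != errorcodes.DEADLOCK_DETECTED or attempt == max_attempts:
                    raise
                time.sleep(0.1 * attempt)   # short backoff before retrying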

-- 
Bill Moran
Collaborative Fusion Inc.
http://people.collaborativefusion.com/~wmoran/

wmoran@xxxxxxxxxxxxxxxxxxxxxxx
Phone: 412-422-3463x4023

