On Thu, 27 Nov 2014 15:07:49 +1100
Sanjaya Vithanagama <svithanagama@xxxxxxxxx> wrote:

> On Wed, Nov 26, 2014 at 11:47 AM, Bill Moran <wmoran@xxxxxxxxxxxxxxxxx>
> wrote:
>
> > On Wed, 26 Nov 2014 10:41:56 +1100
> > Sanjaya Vithanagama <svithanagama@xxxxxxxxx> wrote:
> >
> > > > * How frequently do deadlocks occur?
> > >
> > > We are seeing deadlocks about 2-3 times per day in the production server.
> > > To reproduce the problem easily we've written a simple Java class with
> > > multiple threads calling the stored procedures that run the above
> > > queries inside a loop. This way we can easily recreate a scenario that
> > > happens in production.
> >
> > Don't overcomplicate your solution. Adjust your code to detect the deadlock
> > and replay the transaction when it happens. At 2-3 deadlocks per day, it's
> > difficult to justify any other solution (as any other solution would be
> > more time-consuming to implement, AND would interfere with performance).
>
> When you say replay the transaction, I believe that is to catch the
> exception inside the stored procedure? We've considered that option at one
> stage, but the problem with that is we don't have enough context
> information at the stored procedure where this deadlock occurs.

Why not catch it in the application calling the stored procedure? I don't
understand how you could not have enough context to re-run the command you
were just trying to run. Can you elaborate on what you mean by that?

> > I've worked with a number of write-heavy applications that experienced
> > deadlocks, some of them on the order of hundreds of deadlocks per day.
> > In some cases, you can adjust the queries to reduce the incidence of
> > deadlocks, or eliminate the possibility of deadlocks completely. The
> > situation that you describe is not one of those cases, as the planner
> > can choose to lock rows in whatever order it thinks is most efficient,
> > and you don't have direct control over that.
> >
> > The performance hit you'll take 2-3 times a day when a statement has to
> > be replayed due to deadlock will hardly be noticed (although a statement
> > that takes 50 seconds will cause raised eyebrows if it runs 2x), but that
> > will only happen 2-3 times a day, and the solution I'm proposing won't
> > have any performance impact on the other 13000000 queries per day that
> > don't deadlock.
> >
> > 2-3 deadlocks per day is normal operation for a heavily contended table,
> > in my experience.
>
> Given that we have no control over how Postgres performs delete and update
> operations, the only other possibility seems to be to partition this table
> by id_A (so that the individual tables will never be deadlocked). But that
> seems too extreme an option at this stage.

That would be overcomplicating the solution, and it almost certainly won't
work anyway. If you're getting deadlocks, it's because two processes are
trying to modify the same rows. Even if you partition, those same rows will
be on the same partition, so you'll still deadlock.

-- 
Bill Moran
I need your help to succeed:
http://gamesbybill.com

-- 
Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
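
As a rough illustration of the retry approach Bill describes above (catch the
deadlock error in the calling application and replay the transaction), a
minimal JDBC sketch might look like the code below. This is not code from the
thread: the connection URL, the procedure name my_stored_procedure, the id_A
value, and MAX_RETRIES are all placeholders. PostgreSQL reports a detected
deadlock with SQLSTATE 40P01, and because the server has already aborted the
losing transaction, rolling back and re-running the same call is safe.

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class DeadlockRetryExample {

    // PostgreSQL SQLSTATE for deadlock_detected
    private static final String DEADLOCK_SQLSTATE = "40P01";
    private static final int MAX_RETRIES = 3;   // arbitrary retry budget

    public static void main(String[] args) throws Exception {
        // Connection details are placeholders, not from the thread.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/mydb", "user", "password")) {
            conn.setAutoCommit(false);
            callWithRetry(conn, 42L);   // 42 stands in for an id_A value
        }
    }

    static void callWithRetry(Connection conn, long idA) throws SQLException {
        for (int attempt = 1; ; attempt++) {
            try (CallableStatement stmt =
                     conn.prepareCall("{ call my_stored_procedure(?) }")) {
                stmt.setLong(1, idA);
                stmt.execute();
                conn.commit();
                return;                        // success
            } catch (SQLException e) {
                conn.rollback();               // the failed transaction is dead either way
                if (DEADLOCK_SQLSTATE.equals(e.getSQLState()) && attempt < MAX_RETRIES) {
                    continue;                  // deadlock victim: simply replay the call
                }
                throw e;                       // some other error, or out of retries
            }
        }
    }
}

The point of the sketch is that the retry loop lives in the application, where
the full call context (the parameters just used and the surrounding logic) is
still available, rather than inside the stored procedure itself.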