On Fri, Apr 15, 2011 at 10:55 AM, Paul Millar <paul.millar@xxxxxxx> wrote:
> Hi all,
>
> I have a question regarding unique constraints, which I've tried to describe
> in general terms to keep things simple.
>
> I'm working on an application that, amongst other things, may add a row to a
> table. This table has a primary key defined over two (of the three) fields,
> which forces the combined value to be unique.
>
> An end-user of this system can cause the application to add a row to a table,
> based on data supplied by that end-user (a create-like command). It can
> happen that end-users repeat themselves: rerunning the same activity with the
> same data. Logically, repeating the activity doesn't make sense; the
> application should fail the second (and all subsequent) attempts with a
> meaningful error message.
>
> When writing the application, there was a deliberate design decision: rather
> than a read-modify-write cycle, with the corresponding overhead of locking
> and the resulting serialisation, the software simply lets the INSERT fail
> (due to the primary-key uniqueness constraint). If this happens, the
> transaction is rolled back and an error message returned.
>
> The software can identify whether the problem is due to an end-user repeating
> an earlier action by looking at the class code of the SQL error ("23" ==
> Constraint Violation). This allows us to return the correct error.
>
> This works fine: the correct error message is reported and the system behaves
> as it should. There's one problem: if a user repeats their activity then
> PostgreSQL logs the corresponding constraint violation:
>
>     ERROR:  duplicate key value violates unique constraint [..]
>
> The log files may contain many such messages, depending on the usage pattern
> of the end-user. Including all these messages in the log file is distracting.
>
> The question is: can we suppress the logging of these messages, but allow
> other error messages to be logged normally?
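[For readers following along, the let-the-INSERT-fail design Paul describes can be sketched roughly as below. This is a minimal, self-contained illustration using Python's sqlite3 module purely so it runs without a PostgreSQL server; with PostgreSQL and a driver such as psycopg2 you would instead catch the driver's error and check whether its SQLSTATE belongs to class "23" (integrity constraint violation). The table and column names are invented for the example.]

```python
import sqlite3

def create_activity(conn, user_id, activity, payload):
    """Attempt the INSERT and let the unique constraint reject repeats."""
    try:
        with conn:  # commits on success, rolls back on error
            conn.execute(
                "INSERT INTO activities (user_id, activity, payload)"
                " VALUES (?, ?, ?)",
                (user_id, activity, payload),
            )
        return "created"
    except sqlite3.IntegrityError:
        # With psycopg2 the equivalent check would inspect the SQLSTATE:
        # except psycopg2.Error as e: if e.pgcode.startswith("23"): ...
        # (class 23 == integrity constraint violation)
        return "error: this activity was already performed"

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE activities ("
    " user_id INTEGER, activity TEXT, payload TEXT,"
    " PRIMARY KEY (user_id, activity))"
)

print(create_activity(conn, 1, "create-widget", "x"))  # created
print(create_activity(conn, 1, "create-widget", "x"))  # duplicate -> error
```

As Paul notes, the duplicate attempt is rejected by the database itself, with no read-modify-write cycle; the cost is that the server logs each violation.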
Consider using INSERT ... SELECT ... WHERE NOT EXISTS ... from the app, so as not to insert the row if the key is already in the table, and checking the number of rows affected (or use RETURNING) to see whether the insert hit. You can still see duplicate key errors if you do this (race condition), but they should be much rarer.

You could also write a function to wrap the insert and trap the error, but that's pretty hacky imo.

merlin

--
Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
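[A rough sketch of Merlin's INSERT ... SELECT ... WHERE NOT EXISTS suggestion, again using Python's sqlite3 so the example is self-contained; the same SQL shape works in PostgreSQL, where appending RETURNING would be another way to see whether the insert hit. Table and column names are invented for illustration, and note the race Merlin mentions: two concurrent sessions can both pass the NOT EXISTS check, so the unique constraint is still the final arbiter.]

```python
import sqlite3

INSERT_IF_ABSENT = """
INSERT INTO activities (user_id, activity, payload)
SELECT ?, ?, ?
WHERE NOT EXISTS (
    SELECT 1 FROM activities WHERE user_id = ? AND activity = ?
)
"""

def create_activity(conn, user_id, activity, payload):
    with conn:  # commits on success, rolls back on error
        cur = conn.execute(
            INSERT_IF_ABSENT,
            (user_id, activity, payload, user_id, activity),
        )
    # rowcount says how many rows the INSERT actually inserted; 0 means
    # the key already existed, so we report the duplicate ourselves
    # without raising (or server-logging) a constraint violation.
    return "created" if cur.rowcount == 1 else "duplicate"

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE activities ("
    " user_id INTEGER, activity TEXT, payload TEXT,"
    " PRIMARY KEY (user_id, activity))"
)

print(create_activity(conn, 1, "create-widget", "x"))  # created
print(create_activity(conn, 1, "create-widget", "x"))  # duplicate
```

In the common (non-racing) case this keeps the duplicate-key errors out of the PostgreSQL log, which was Paul's original complaint; only the rare concurrent repeat still trips the constraint.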