Jim C. Nasby wrote:
> It can cause a race if another process could be performing those same
> inserts or updates at the same time.

There are inserts and updates running all of the time, but never on the
same data. I'm not sure how I can get around this, since the queries are
coming from my radius system, which is not able to queue this stuff up
because it waits for a successful query before returning an OK packet
back to the client.

> I know the UPDATE case can certainly cause a race. 2 connections try to
> update, both hit NOT FOUND, both try to insert... only one will get to
> commit.

Why is that? Doesn't the first update lock the row, causing the second
one to wait, and then the second one stomps on the row, allowing both to
commit? I must be confused....

> I think that the UNIQUE_VIOLATION case should be safe, since a second
> inserter should block if there's another insert that's waiting to
> commit.

Are you saying that inserts inside of an EXCEPTION block will block, but
normal inserts won't?

> DELETEs are something else to think about for both cases.

I only do one delete, and that is every night when I move the data to
the primary table and remove that day's worth of data from the tmp
table. This is done at non-peak times along with a vacuum, so I think
I'm good here.

> If you're certain that only one process will be performing DML on those
> tables at any given time, then what you have is safe. But if that's the
> case, I'm thinking you should be able to group things into chunks, which
> should be more efficient.

Yeah, I wish I could, but I really need to do them one at a time because
of how radius waits for a successful query before telling the access
server all is well. If the query fails, the access server won't get the
'OK' packet and will send the data to the secondary radius system, where
it gets queued.

Do you know of a way to see what is going on with the locking system
other than "select * from pg_locks"? I can't ever seem to catch the
system when queries start to lag.

Thanks again,
schu
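
A minimal sketch of the update-then-insert pattern being discussed,
along the lines of the merge_db example in the PL/pgSQL documentation.
The table acct_tmp and the function merge_acct are hypothetical names
used only for illustration; the point is that the unique_violation
handler lets the loser of an insert race loop back and retry the UPDATE:

    CREATE TABLE acct_tmp (username text PRIMARY KEY, octets bigint);

    CREATE FUNCTION merge_acct(p_user text, p_octets bigint) RETURNS void AS $$
    BEGIN
        LOOP
            -- First try to update an existing row.
            UPDATE acct_tmp SET octets = octets + p_octets
             WHERE username = p_user;
            IF FOUND THEN
                RETURN;
            END IF;
            -- No row yet, so try to insert one.  If another connection
            -- inserts the same key first, we get a unique_violation and
            -- loop back to retry the UPDATE.
            BEGIN
                INSERT INTO acct_tmp (username, octets)
                VALUES (p_user, p_octets);
                RETURN;
            EXCEPTION WHEN unique_violation THEN
                NULL;  -- lost the insert race; retry the UPDATE
            END;
        END LOOP;
    END;
    $$ LANGUAGE plpgsql;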
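
On the lock-monitoring question, one sketch that goes a little further
than a plain "select * from pg_locks" is to join pg_locks against
pg_stat_activity so you can see which sessions are waiting and on what.
Column names here assume a recent PostgreSQL; older releases used
procpid and current_query rather than pid and query:

    -- Sessions waiting on ungranted locks, with the query they are running.
    SELECT l.pid,
           l.locktype,
           l.relation::regclass AS relation,
           l.mode,
           a.query
      FROM pg_locks l
      JOIN pg_stat_activity a ON a.pid = l.pid
     WHERE NOT l.granted;

Since the lag is transient, sampling a query like this every few seconds
and logging the output is one way to catch it after the fact.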