Justin <zzzzz.graf@xxxxxxxxx> writes:
> I now see what is causing this specific issue...
> The update and row versions is happening on 2kb chunk at a time, That's
> going to make tracking what other clients are doing a difficult task.

Yeah, it's somewhat unfortunate that the chunkiness of the underlying
data storage becomes visible to clients if they try to do concurrent
updates of the same large object.  Ideally you'd only get a concurrency
failure if you tried to overwrite the same byte(s) that somebody else
did, but as it stands, modifying nearby bytes might be enough --- or
not, if there's a chunk boundary between them.

On the whole, though, it's not clear to me why concurrent updates of
sections of large objects are a good application design.  You probably
ought to rethink how you're storing your data.

			regards, tom lane
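
PS: For concreteness, here's a minimal sketch of the boundary effect,
assuming psycopg2 under REPEATABLE READ isolation.  The DSN and
large-object OID below are placeholders, not anything from this thread:

    # Two sessions update different bytes of the same large object.
    # Offsets in the same 2kB pg_largeobject chunk conflict; offsets
    # past a chunk boundary would not.
    import psycopg2
    import psycopg2.errors
    import psycopg2.extensions as ext

    DSN = "dbname=test"    # placeholder connection string
    LO_OID = 16385         # placeholder: OID of an existing large object

    conn1 = psycopg2.connect(DSN)
    conn2 = psycopg2.connect(DSN)
    for c in (conn1, conn2):
        c.set_session(isolation_level=ext.ISOLATION_LEVEL_REPEATABLE_READ)

    # Session 1 rewrites ten bytes at offset 0 (chunk 0).
    lo1 = conn1.lobject(LO_OID, "w")
    lo1.seek(0)
    lo1.write(b"x" * 10)

    # Session 2 opens the object now, so its snapshot predates
    # session 1's commit.
    lo2 = conn2.lobject(LO_OID, "w")
    conn1.commit()

    # Offset 100 is a different byte range but still chunk 0, so this
    # write hits the row session 1 just updated and fails.  Seeking to
    # 2048 or beyond would land in a different chunk row and succeed.
    lo2.seek(100)
    try:
        lo2.write(b"y" * 10)
        conn2.commit()
    except psycopg2.errors.SerializationFailure:
        # "could not serialize access due to concurrent update"
        conn2.rollback()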