I now see what is causing this specific issue...
The updates and row versioning happen one 2 KB chunk at a time, which is going to make tracking what other clients are doing a difficult task.
Each client would need some means of notifying all the other clients that an update occurred in a given chunk, which could force a complete reload of the data if the update spilled into adjoining rows.
The notifications and re-fetching of data to keep the clients in sync are going to make this a network-chatty app.
Maybe add a bit to the documentation stating "row versioning occurs every X chunks".
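For what it's worth, a minimal sketch of the kind of cross-client notification described above, using LISTEN/NOTIFY via psycopg2. The channel name "lo_updates" and the "<loid>:<offset>:<length>" payload format are assumptions for illustration, not an existing API:

    # Minimal sketch: clients announce large-object updates over LISTEN/NOTIFY
    # so peers can re-fetch only the affected object/range. Channel name and
    # payload format ("<loid>:<offset>:<length>") are assumptions.
    import select
    import psycopg2

    conn = psycopg2.connect("dbname=test")
    conn.autocommit = True
    cur = conn.cursor()

    # Writer side: after updating part of a large object, tell the other clients.
    def announce_update(loid, offset, length):
        cur.execute("SELECT pg_notify('lo_updates', %s)",
                    (f"{loid}:{offset}:{length}",))

    # Reader side: wait for updates and decide what to re-fetch.
    cur.execute("LISTEN lo_updates")
    while True:
        if select.select([conn], [], [], 5) == ([], [], []):
            continue  # timeout, keep waiting
        conn.poll()
        while conn.notifies:
            note = conn.notifies.pop(0)
            loid, offset, length = note.payload.split(":")
            print(f"large object {loid} changed at {offset} (+{length} bytes)")
            # re-read just the affected range here, e.g. with lo_get(loid, offset, length)

Even with something like this, every writer has to remember to announce, and every reader has to re-fetch, which is exactly the chattiness concern above.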
On Wed, Dec 18, 2019 at 11:12 AM Tom Lane <tgl@xxxxxxxxxxxxx> wrote:
Justin <zzzzz.graf@xxxxxxxxx> writes:
> I have a question reading through this email chain. Does Large Objects
> table using these functions work like normal MVCC where there can be two
> versions of a large object in pg_largeobject .
Yes, otherwise you could never roll back a transaction that'd modified
a large object.
> My gut says no as
> moving/copying potentially 4 TB of data would kill any IO.
Well, it's done on a per-chunk basis (normally about 2K per chunk),
so you won't do that much I/O unless you're changing all of a 4TB
object.
regards, tom lane
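As a side note, the per-chunk behaviour Tom describes can be observed directly. A rough sketch, assuming a psycopg2 connection with superuser rights (pg_largeobject is not readable otherwise) and a throwaway large object:

    # Rough sketch: overwriting part of a large object only creates new row
    # versions for the chunks (pages) that were actually touched.
    import psycopg2

    conn = psycopg2.connect("dbname=test")
    cur = conn.cursor()

    # Create a ~10-chunk large object (chunk size is typically 2048 bytes).
    cur.execute("SELECT lo_from_bytea(0, %s)", (psycopg2.Binary(b"x" * 20480),))
    loid = cur.fetchone()[0]
    conn.commit()

    # Overwrite 10 bytes inside a single chunk.
    cur.execute("SELECT lo_put(%s, 5000, %s)", (loid, psycopg2.Binary(b"y" * 10)))
    conn.commit()

    # Only the page containing offset 5000 (pageno 2 here) should show the
    # newer xmin; the other pages keep their original row version.
    cur.execute("""
        SELECT pageno, xmin
        FROM pg_largeobject
        WHERE loid = %s
        ORDER BY pageno
    """, (loid,))
    for pageno, xmin in cur.fetchall():
        print(pageno, xmin)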