Thanks Laurenz and Guillaume, got clarity on how it works.
I ran a few tests and you are right: in the case of the TAR format, the size restriction applies to pg_catalog.pg_largeobject (as it is the common place where BLOB data is stored).
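For anyone hitting the same limit, the amount of large object data in a database can be checked with something like this in psql (pg_largeobject holds the data for all large objects):

    SELECT pg_size_pretty(pg_total_relation_size('pg_catalog.pg_largeobject'));

If that result is above 8 GB, it explains why the TAR format dump fails.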
On Mon, Sep 21, 2015 at 2:47 PM, Albe Laurenz <laurenz.albe@xxxxxxxxxx> wrote:
girish R G peetle wrote:
> Got it. Thanks Laurenz.
> One thing is a little confusing: if large objects don't belong to a table, then how does the
> restriction of 8 GB table size for the TAR format apply when BLOB data is involved?
> Is it (regular table data size) + (BLOB data held by the OIDs stored in the table)?
What is the command you use to dump table + large objects?
I guess that the large objects make up more than 8 GB and are dumped as a
single file. 8 GB is the size limit of a single file in a TAR archive.
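For illustration (database and file names are just placeholders), the limit only concerns the tar archive format; the custom format does not go through tar, so it should not be affected by this particular restriction:

    pg_dump -F t -f mydb.tar  mydb    # tar format: individual files inside the archive are limited to 8 GB
    pg_dump -F c -f mydb.dump mydb    # custom format: no tar member size limit

Switching to the custom (or directory) format is the usual way around this.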
> If just the OID of a large object is copied to a different table, say 'Table2', then for 'Table2' as well
> should I calculate the total size as (regular table data size) + (BLOB data held by the OIDs stored in the
> table)?
That's exactly the problem: large objects don't technically belong to the table
that references them. If you reference a large object from more than one
table, there is no good way of defining which table it belongs to.
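A contrived example (hypothetical tables; lo_from_bytea needs 9.4 or later) shows why: the same large object OID can be stored in two tables, and neither of them "owns" it:

    CREATE TABLE table1 (id integer, doc oid);
    CREATE TABLE table2 (id integer, doc oid);
    -- create one large object and store its OID in table1
    INSERT INTO table1 VALUES (1, lo_from_bytea(0, 'some data'::bytea));
    -- copy only the OID; both tables now reference the same large object
    INSERT INTO table2 SELECT id, doc FROM table1;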
But that's irrelevant to the problem of files in a dump exceeding the limit of 8 GB,
isn't it? If you dump with the --blobs option, all large objects in the whole database
will be dumped, whether they are referenced from a table or not.
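To see what --blobs will pick up, you can simply count the large objects in the database (pg_largeobject_metadata, available in 9.0 and later), for example:

    SELECT count(*) FROM pg_largeobject_metadata;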
Yours,
Laurenz Albe