
Re: Avg/max size of these JSON docs in Postgres

On 10/12/21 21:21, Michael Lewis wrote:
On Tue, Oct 12, 2021 at 12:51 PM Simon Riggs <simon.riggs@xxxxxxxxxxxxxxxx> wrote:

    On Tue, 12 Oct 2021 at 18:53, Michael Lewis <mlewis@xxxxxxxxxxx> wrote:
     >
     > On Tue, Oct 12, 2021 at 1:43 AM Simon Riggs
     > <simon.riggs@xxxxxxxxxxxxxxxx> wrote:
     >>
     >> On Tue, 12 Oct 2021 at 08:14, Ram Pratap Maurya
     >> <ram.maurya@xxxxxxxxxxxxxxxxxxxx> wrote:
     >>
     >> > Confirm what is Avg/max size of these JSON docs in Postgres.
     >>
     >> JSON and JSONB datatypes can both be max 1GB in size.
     >
     > That is per row.

    No, that is per column.


Yes, sorry. My attempt at clarification only muddled things. My understanding of the limit is that each column within each row can be up to 1GB in size.

But you're kinda right, actually. It very much is per-tuple, because in various places we form tuples with all the data inline. Consider for example this:

  create table t (a text, b text);
  alter table t alter column a set storage extended;
  alter table t alter column b set storage extended;

  insert into t select repeat(md5('a'), 512*1024*1024/32),
                       repeat(md5('b'), 512*1024*1024/32);
  ERROR:  invalid memory alloc request size 1073741880

Clearly, both values are only 512MB (no compression). Yet it fails, simply because tts_virtual_copy_heap_tuple calls heap_form_tuple to form a tuple with all values and also the tuple header.
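The failing request size can be reconstructed from the numbers above (a back-of-the-envelope sketch; the 56-byte figure for the tuple header, varlena headers, and alignment padding is inferred from the error message, not measured in the source):

```python
# Each md5() string is 32 chars; repeating it 512*1024*1024/32 times
# yields exactly 512MB of text data (before any headers).
value_len = 32 * (512 * 1024 * 1024 // 32)   # 536870912 bytes = 512MB

# Two such values plus assumed per-tuple overhead of 56 bytes
# (tuple header + varlena headers + alignment, inferred from the error)
request = 2 * value_len + 56
print(request)                               # 1073741880, as in the ERROR

# palloc() rejects any request above MaxAllocSize = 1GB - 1 byte
MAX_ALLOC = 2**30 - 1                        # 1073741823
print(request > MAX_ALLOC)                   # True -> "invalid memory alloc request"
```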

But this succeeds, because the first value is 64B shorter (two fewer repetitions of the 32-byte md5 string), leaving enough space for the tuple header etc.

  insert into t select repeat(md5('a'), 512*1024*1024/32 - 2),
                       repeat(md5('b'), 512*1024*1024/32);
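The shorter variant fits under the same cap: dropping two 32-byte repetitions shaves 64 bytes off the first value (a quick check, reusing the same assumed 56-byte overhead from the earlier error):

```python
value_len = 32 * (512 * 1024 * 1024 // 32)   # 536870912 bytes = 512MB
OVERHEAD = 56                                # assumed: inferred from the earlier ERROR

# First value is two md5 repetitions (64 bytes) shorter
request = (value_len - 64) + value_len + OVERHEAD
print(request)                               # 1073741816

MAX_ALLOC = 2**30 - 1                        # MaxAllocSize: 1GB - 1 byte
print(request <= MAX_ALLOC)                  # True -> the INSERT succeeds
```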

And you can even select the data:

  select * from t;

You can even do this:

  update t set a = repeat(md5('a'), 512*1024*1024/32);

which works, so now you have a row with two 512MB values. But then you'll face this:

  select * from t;
  ERROR:  out of memory
  DETAIL:  Cannot enlarge string buffer containing 536870922 bytes by
           536870912 more bytes.

because printtup() builds a single huge string with all the data (and if the columns used compression, the values would be decompressed here, because the data goes out to the client).
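The numbers in that DETAIL line add up to just over the same 1GB allocation cap (a quick check against MaxAllocSize; what the 10 bytes beyond 512MB already sitting in the buffer consist of is my guess, presumably protocol framing):

```python
# Numbers taken straight from the DETAIL message
current = 536870922   # bytes already in printtup's output string buffer
needed  = 536870912   # the second 512MB value still to be appended

MAX_ALLOC = 2**30 - 1 # MaxAllocSize: 1GB - 1 byte
print(current + needed)              # 1073741834
print(current + needed > MAX_ALLOC)  # True -> "Cannot enlarge string buffer"
```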

So yeah, there's an explicit 1GB limit per value, but having rows close to the 1GB limit is going to cause all sorts of unpredictable and rather painful issues :-(


regards

--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




