
Re: Trouble Upgrading Postgres

Thanks, Daniel.

Using your idea, I found the records over 400 MB and deleted them in the application. The largest two were in inactive matters, and the third is still available elsewhere if needed. I'll try pg_dump again after work hours; hopefully it will succeed now that I've deleted the largest records.

Adrian, I'll try changing shared_buffers the next time I can restart Postgres, at least if deleting the largest records and adding VM hasn't worked.


On Tue, Nov 6, 2018 at 6:47 AM Daniel Verite <daniel@xxxxxxxxxxxxxxxx> wrote:
        Charles Martin wrote:

>  but the second one returned this:
>
> 0 "623140"
> 1 "53"
> 2 "12"
> 3 "10"
> 4 "1"
> 5 "1"
> 7 "1"
> [null] "162"
>
> Not quite sure what that means, but if there is just a small number of
> overly-large records, I might be able to delete them. If I can find them.

The query was:

  SELECT octet_length(docfilecontents)/(1024*1024*100),
         count(*)
  FROM docfile
  GROUP BY octet_length(docfilecontents)/(1024*1024*100);

The results above show that there is one document weighing over 700 MB
(the first column being the multiple of 100MB), one between 500 and
600 MB, one between 400 MB and 500 MB, 10 between 300 and 400 MB, and
so on.

The hex expansion performed by COPY must allocate twice that size,
plus the rest of the row, and if that resulting size is above 1GB, it
will error out with the message you mentioned upthread:
ERROR: invalid memory alloc request size <some value over 1 billion>.
So there's no way it can deal with the contents over 500MB, and the
ones just under that limit may also be problematic.
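To see that doubling on a small value (a sketch; the same arithmetic applied to a 550 MB document gives roughly 1.1 GB of text, over the 1 GB limit):

```sql
-- Each raw byte becomes two hex characters, so the encoded text
-- is twice the size of the bytea value:
SELECT octet_length('\x00ff10'::bytea)                 AS raw_bytes,  -- 3
       octet_length(encode('\x00ff10'::bytea, 'hex')) AS hex_chars;  -- 6
```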

A quick and dirty way of getting rid of these contents would be to
nullify them. For instance, nullify anything over 400MB:

UPDATE docfile SET docfilecontents=NULL
  WHERE octet_length(docfilecontents) > 1024*1024*400;

A cleaner solution would be to delete them through the application, if
that's possible. You may turn the above query into a SELECT that
retrieves the fields of interest (avoid SELECT * because of the huge
column).
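For instance, a sketch of such a SELECT (the "id" column is an assumption about your schema; substitute your actual key column):

```sql
-- Hypothetical "id" column; lists the offending rows largest-first
-- without pulling the huge bytea column itself.
SELECT id,
       pg_size_pretty(octet_length(docfilecontents)::bigint) AS size
FROM docfile
WHERE octet_length(docfilecontents) > 1024*1024*400
ORDER BY octet_length(docfilecontents) DESC;
```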


Best regards,
--
Daniel Vérité
PostgreSQL-powered mailer: http://www.manitou-mail.org
Twitter: @DanielVerite

