Re: How to get more than 2^32 BLOBs

Hi,

On 08/04/2020 at 12:12, Donato Marrazzo wrote:
> Hi Laurenz,
> thank you for your reply.
> Are you aware of any performance drawback?

We had a customer with millions of small Large Objects, partly because
their application forgot to unlink them.

As a consequence, pg_dump used huge amounts of memory, making a backup
impossible. This was with PG 9.5; I don't think the situation has
improved since.
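For that kind of cleanup, the contrib tool vacuumlo can unlink large
objects that are no longer referenced from any oid/lo column. A rough
sketch (the database name "appdb" is just a placeholder; do the dry run
first and check the counts before deleting anything):

```shell
# Dry run: only report how many orphaned large objects would be removed
vacuumlo -n -v appdb

# Actually unlink large objects not referenced by any oid/lo column
vacuumlo -v appdb
```

Note that vacuumlo only frees the storage; it does nothing about the
application bug that leaks the objects in the first place.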

> On Wed, 8 Apr 2020 at 12:06, Laurenz Albe
> <laurenz.albe@xxxxxxxxxxx <mailto:laurenz.albe@xxxxxxxxxxx>> wrote:
...
>     > I'm working on a use case where there are many tables with blobs
>     (on average not so large, about 32 KB).
>     > I foresee that in a 2-3 year time frame, the limit on overall blobs
>     will be breached: more than 2^32 blobs.
>     > - Is there a way to change the OID limit?
>     > - Should we switch to a bytea implementation?
>     > - Are there any drawbacks of bytea except the maximum size?

>     Don't use large objects.  They are only useful if
>     1) you have files larger than 1GB or
>     2) you need to stream writes
> 
>     There are no such limitations if you use the "bytea" data type, and
>     it is much simpler to handle at the same time.


+1
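For scale, the 2^32 figure is easy to sanity-check: exhausting the
32-bit OID space within the 2-3 year horizon from the original post
implies a sustained creation rate on the order of tens of large objects
per second (a back-of-the-envelope sketch; the 3-year figure is the one
quoted above):

```python
# Rough estimate: how many large objects per second would exhaust
# the 32-bit OID space in 3 years (horizon from the original post)?
SECONDS_PER_YEAR = 365 * 24 * 3600

oid_limit = 2 ** 32          # ~4.29 billion OIDs
years = 3

rate = oid_limit / (years * SECONDS_PER_YEAR)
print(f"~{rate:.0f} large objects created per second")  # → ~45
```

Anything near that rate also means pg_dump has to track billions of
individual objects, which is exactly the memory problem described above.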


-- 
Christophe Courtois
Consultant Dalibo
https://dalibo.com/




