Stefan Froehlich <postgresql@xxxxxxxxxxxxxxxxx> writes:
> On Sun, Nov 25, 2012 at 06:57:22PM +0100, Pavel Stehule wrote:
>>> | INSERT INTO a (id, fkid, displayorder, name, description, internal, mimetype, mimedata, filesize) VALUES (73,6,5, E'Seefeld.rar', E'Seefeld',0, E'application/octet-stream',decode('5261...0700', 'hex'),311484587);

>> Attention - BYTEA is not BLOB and although the physical limit is 1G, the
>> real limit is significantly less - it depends on RAM - 7 years ago we
>> found the practical limit to be about 20MB.

> Oops, that's not too much. In the docs I've seen a 4-byte length
> descriptor, thus expected a size limit of 4 GB and felt quite safe
> with a maximum size of 300 MB.

I replicated this case and found that the immediate cause of the problem
is addlit() in the scanner, which keeps doubling its work buffer until
the buffer is larger than the literal string --- so even though the
string is "only" 600MB, it tries to make a 1GB buffer.  We could fix
that particular case, but there are doubtless others.  It's not really a
good idea to push query texts of hundreds of megabytes through the
system.  Quite aside from whether you'd hit the 1GB-per-alloc hard
limit, the system is likely to make quite a few copies of any constant
value in the process of parsing/planning a query.

You might have better luck if you treated the large value as an
out-of-line parameter instead of a literal constant.  Aside from dodging
the costs of a very large query string and a large Const value, you
could send the parameter value in binary and avoid hex-to-binary
conversion costs.

			regards, tom lane

--
Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
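[The out-of-line-parameter approach looks roughly like this with libpq's
PQexecParams. A minimal sketch, not a drop-in fix: the connection string,
the column list, and the tiny stand-in payload are illustrative
assumptions, and error handling is abbreviated. The key point is
paramFormats[0] = 1, which sends the bytea value in binary so the query
text stays small and no hex conversion happens on either end.]

```c
/* Sketch: insert a large bytea via an out-of-line binary parameter.
 * Compile with: cc insert_blob.c -lpq
 * Connection string and table layout are illustrative assumptions.
 */
#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    PGconn *conn = PQconnectdb("dbname=test");
    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    /* Stand-in for the real file contents read into memory.  The value
     * never appears in the query text, so the parser only ever sees a
     * short command string with a $1 placeholder. */
    char payload[] = "Rar!";

    const char *paramValues[1] = { payload };
    int paramLengths[1] = { 4 };
    int paramFormats[1] = { 1 };   /* 1 = binary: no hex escaping at all */

    PGresult *res = PQexecParams(conn,
        "INSERT INTO a (id, name, mimedata) VALUES (73, 'Seefeld.rar', $1)",
        1,            /* one parameter */
        NULL,         /* let the server infer the type (bytea) */
        paramValues,
        paramLengths, /* required for binary parameters */
        paramFormats,
        0);           /* text results */

    if (PQresultStatus(res) != PGRES_COMMAND_OK)
        fprintf(stderr, "INSERT failed: %s", PQerrorMessage(conn));

    PQclear(res);
    PQfinish(conn);
    return 0;
}
```

[Drivers expose the same mechanism at a higher level; the point is that
the value travels as protocol data, not as part of the SQL string.]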
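[The arithmetic behind the failure can be sketched in a few lines of
Python. This is an illustration of the sizes involved, not the actual
scanner code: hex-encoding in decode('...', 'hex') takes two query-text
characters per raw byte, and a buffer that doubles until it can hold the
literal rounds that up to the next power of two, which for the reported
~297MB file lands right at 1 GiB -- just past the ~1GB allocation limit.]

```python
# Illustration only: numbers mirror the reported INSERT
# (filesize 311484587 bytes), not actual PostgreSQL source code.

MAX_ALLOC = 0x3FFFFFFF  # PostgreSQL's per-allocation cap: just under 1 GB


def hex_literal_len(nbytes: int) -> int:
    """decode('...', 'hex') needs two hex digits per raw byte."""
    return 2 * nbytes


def doubling_buffer(needed: int, start: int = 1024) -> int:
    """Mimic a work buffer that doubles until it can hold the literal."""
    size = start
    while size < needed:
        size *= 2
    return size


raw = 311_484_587                # ~297 MB file from the INSERT
literal = hex_literal_len(raw)   # ~594 MB of query text
buf = doubling_buffer(literal)   # rounds up to the next power of two

print(literal)          # 622969174  (~594 MB)
print(buf)              # 1073741824 (1 GiB)
print(buf > MAX_ALLOC)  # True: the request exceeds the hard limit
```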