On Tue, Oct 30, 2012 at 6:08 AM, Tatsuo Ishii <ishii@xxxxxxxxxxxxxx> wrote:
>> I have an SQL file (its size is 1GB). When I execute it, the error
>> "String of 987098801 bytes is too long for encoding conversion"
>> occurs. Please give me a solution for this.
>
> You hit the internal memory allocation limit in PostgreSQL. IMO,
> there's no way to avoid the error except to use a client encoding
> identical to the backend's.

We recently had a customer who suffered a failure in pg_dump because
the quadruple allocation required by COPY OUT for an encoding
conversion exceeded allocatable memory. I wonder whether it would be
possible to rearrange things so that we can do a "streaming" encoding
conversion. That is, if we have a large datum that we're trying to
send back to the client, could we perhaps chop off the first 50MB or
so, do the encoding conversion on that amount of data, send the data
to the client, lather, rinse, repeat?

Your recent work to increase the maximum possible size of large
objects (for which I thank you) seems like it could make these sorts
of issues more common. As objects get larger, I don't think we can go
on assuming that it's OK for peak memory utilization to keep hitting
5x or more.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
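For what it's worth, here is a minimal sketch of the chunked-conversion
idea, using POSIX iconv(3) rather than PostgreSQL's internal conversion
machinery. The 1MB chunk size, the UTF-8-to-EUC-JP direction, and the
emit() callback are all illustrative assumptions, not anything in the
backend today. The point is just that handling an incomplete multibyte
sequence at a chunk boundary (EINVAL) lets the leftover bytes roll into
the next pass, so peak memory stays proportional to the chunk size
rather than to the whole datum.

/*
 * Sketch only: streaming encoding conversion in bounded chunks with
 * POSIX iconv(3).  Not PostgreSQL code; chunk size, encodings, and the
 * emit() callback are illustrative assumptions.
 */
#include <errno.h>
#include <iconv.h>
#include <stdlib.h>

#define CHUNK_SIZE (1024 * 1024)    /* feed at most 1MB of input per pass */

/* Convert 'inlen' bytes of 'input' chunk by chunk, passing each converted
 * piece to 'emit' (e.g. something that pushes it onto the wire). */
static int
stream_convert(const char *input, size_t inlen,
               void (*emit) (const char *buf, size_t len))
{
    iconv_t     cd = iconv_open("EUC-JP", "UTF-8");
    char       *inptr = (char *) input;
    size_t      inleft = inlen;
    char       *outbuf;

    if (cd == (iconv_t) -1)
        return -1;

    /* UTF-8 -> EUC-JP never expands more than 3x, so a 4x output buffer
     * guarantees a single pass cannot fail with E2BIG. */
    outbuf = malloc((size_t) CHUNK_SIZE * 4);
    if (outbuf == NULL)
    {
        iconv_close(cd);
        return -1;
    }

    while (inleft > 0)
    {
        size_t      feed = inleft < CHUNK_SIZE ? inleft : CHUNK_SIZE;
        size_t      prev_left = inleft;
        char       *outptr = outbuf;
        size_t      outleft = (size_t) CHUNK_SIZE * 4;

        if (iconv(cd, &inptr, &feed, &outptr, &outleft) == (size_t) -1 &&
            errno != EINVAL)
        {
            /* EINVAL just means the chunk ended mid-character; those
             * trailing bytes get re-fed on the next pass.  Anything
             * else (e.g. EILSEQ for invalid input) is fatal here. */
            free(outbuf);
            iconv_close(cd);
            return -1;
        }

        emit(outbuf, (size_t) (outptr - outbuf));   /* ship this piece */

        /* iconv advanced inptr past what it consumed; recompute the rest. */
        inleft = (size_t) ((input + inlen) - inptr);
        if (inleft == prev_left)
        {
            /* No progress: input ends with a truncated multibyte char. */
            free(outbuf);
            iconv_close(cd);
            return -1;
        }
    }

    free(outbuf);
    iconv_close(cd);
    return 0;
}

The oversized output buffer is deliberate: it reduces the error handling
to the one interesting case, a multibyte character split across the
chunk boundary. Presumably a backend version of this loop would sit
wherever the conversion currently happens, with emit() replaced by
whatever sends the converted bytes to the client.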