Re: select on 22 GB table causes "An I/O error occured while sending to the backend." exception

On Tue, 2008-08-26 at 18:44 +0200, henk de wit wrote:
> Hi,
> 
> We're currently having a problem with queries on a medium sized table. This table is 22GB in size (via select pg_size_pretty(pg_relation_size('table'));). It has 7 indexes, which bring the total size of the table to 35 GB (measured with pg_total_relation_size).
> 
> On this table we're inserting records at a relatively low frequency of roughly 6 to 10 per second. We're using PG 8.3.1 on a machine with two dual-core 2.4 GHz Xeon CPUs, 16 GB of memory and Debian Linux. The machine is completely devoted to PG; nothing else runs on the box.
> 
> Lately we're getting a lot of exceptions from the Java process that does these inserts: "An I/O error occured while sending to the backend." No other information is provided with this exception (besides the stack trace, of course). The pattern is that for about a minute, almost every insert to this 22 GB table results in this exception. After this minute everything is suddenly fine and PG happily accepts all inserts again. We tried to nail the problem down, and it seems that every time this happens, a select query on this same table is in progress. This select query starts right before the insert problems begin, and most often inserts are fine again right after the select query finishes executing. Sometimes, though, inserts only fail in the middle of the select query's execution. E.g. if the select query starts at 12:00 and ends at 12:03, inserts fail from 12:01 to 12:02.
> 
> We have spent a lot of hours getting to the bottom of this, but our ideas for fixing this problem are more or less exhausted at the moment.
> 
> I wonder if anyone recognizes this problem and could give some pointers to stuff that we could investigate next. 
> 
> Thanks a lot in advance.

If the select returns a lot of data and you haven't enabled cursors (by
calling setFetchSize), then the entire result set will be loaded into
memory at once, so you could run into an out-of-memory condition on the
client.
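
Roughly, with the JDBC driver, that looks like the sketch below
(untested; the connection details, fetch size and query are
placeholders). Note that the driver only uses a cursor when autocommit
is off and a fetch size has been set:

    import java.sql.*;

    public class CursorFetchExample {
        public static void main(String[] args) throws Exception {
            // Placeholder connection details and query.
            Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://dbhost/mydb", "user", "password");
            conn.setAutoCommit(false);     // required for cursor-based fetching
            Statement st = conn.createStatement();
            st.setFetchSize(1000);         // stream 1000 rows at a time
            ResultSet rs = st.executeQuery("SELECT * FROM big_table");
            while (rs.next()) {
                // process one row at a time instead of holding them all in memory
            }
            rs.close();
            st.close();
            conn.commit();
            conn.close();
        }
    }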

Or if the select uses sorts and PG thinks it has access to more sort
memory than is actually available on the system (due to ulimits,
physical memory restrictions, etc.), then you could run into problems
that look like out-of-memory errors on the server.
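
Keep in mind that work_mem is allowed per sort (and per hash) in each
backend, so a number of concurrent sorts can add up to far more than
the box actually has. A quick way to see the relevant settings from the
same Java client (just a sketch; it assumes an already-open Connection
named conn):

    // Sketch: assumes an open java.sql.Connection named conn.
    ResultSet rs = conn.createStatement().executeQuery(
            "SELECT name, setting, unit FROM pg_settings " +
            "WHERE name IN ('work_mem', 'max_connections', 'shared_buffers')");
    while (rs.next()) {
        String unit = rs.getString("unit");
        System.out.println(rs.getString("name") + " = "
                + rs.getString("setting") + (unit == null ? "" : " " + unit));
    }
    rs.close();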

It could also be something else entirely: exceeding your
max_connections limit, or something like that.

A really good place to start would be to enable tracing on the JDBC
driver.  Look at the docs for the PostgreSQL JDBC driver for how to
enable logging; that should give you a much better picture of exactly
what is failing and where.
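
For example (a sketch; the loglevel URL parameter is what the 8.x
driver docs describe, so check the docs for the exact knob in your
driver version):

    // Send the driver's own log output to stderr.
    java.sql.DriverManager.setLogWriter(
            new java.io.PrintWriter(System.err, true));

    // The 8.x pgjdbc docs also describe a loglevel URL parameter
    // (1 = INFO, 2 = DEBUG); connection details are placeholders.
    java.sql.Connection conn = java.sql.DriverManager.getConnection(
            "jdbc:postgresql://dbhost/mydb?loglevel=2", "user", "password");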

If the issue is server-side, then you will also want to look at the
PostgreSQL logs on the server; anything as serious as a backend aborting
should write an entry in the log.
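
If nothing is showing up there, it may just be that not enough is being
logged; something along these lines in postgresql.conf (the values are
only suggestions) makes backend errors hard to miss:

    logging_collector = on            # capture stderr into log files (8.3 name)
    log_min_messages = warning
    log_min_error_statement = error   # also log the statement that caused the error
    log_line_prefix = '%t [%p] '      # timestamp and backend PID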

-- Mark

