Re: select on 22 GB table causes "An I/O error occured while sending to the backend." exception

> If the select returns a lot of data and you haven't enabled cursors (by
> calling setFetchSize), then the entire SQL response will be loaded in
> memory at once, so there could be an out-of-memory condition on the
> client.

I hear you, but this is absolutely not the case here. There is no other exception anywhere besides "An I/O error occured while sending to the backend." The select query eventually returns somewhere between 10 and 50 rows. Also, the select query runs from a different machine than the one that issues the inserts to the database. I failed to mention in the opening post that, simultaneously with the select and insert queries on this single 22 GB table, thousands if not tens of thousands of other queries are hitting other tables in the same database. There are about 70 other tables, with a combined size of about 40 GB; none of them comes anywhere near the 22 GB of the problematic table. Not one of those other tables has ever shown a problem of this kind.
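For reference, here is a minimal sketch of how cursor-based fetching is enabled in the driver: autocommit must be off and a positive fetch size set on a forward-only statement, otherwise the whole result set is buffered on the client. The URL, credentials, and table name below are placeholders:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class CursorFetch {
        public static void main(String[] args) throws Exception {
            Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://dbhost/mydb", "user", "secret");
            // The driver only uses a server-side cursor when autocommit
            // is off and the statement has a positive fetch size.
            conn.setAutoCommit(false);
            PreparedStatement ps = conn.prepareStatement(
                    "SELECT id, payload FROM big_table");
            ps.setFetchSize(100); // stream 100 rows at a time
            ResultSet rs = ps.executeQuery();
            while (rs.next()) {
                // process one row
            }
            rs.close();
            ps.close();
            conn.close();
        }
    }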

Research on the net made it clear that a lot of these "An I/O error..." exceptions are caused by malfunctioning switches or routers in the network between the application server(s) and the database. In our case that can hardly be true. As mentioned, a great number of other queries are hitting the database: some very small (execution times of about 10 ms), others quite large (hundreds of lines of SQL with over 12 joins and execution times of several minutes). Not a single one of those results in this I/O error.

> It could also be something else entirely; exceeding your max
> connections, something like that.

We did run into that, but I think more as collateral damage. When this single select query on the 22 GB table is executing and the inserts start to fail, things start backing up. As a result, the 6 writes per second queue up and request more and more connections from our connection pool (DBCP, as supplied by Apache Tomcat). We had the pool's maximum set too high, and after a while Postgres responded with a message that the connection limit was exceeded. We then lowered the pool's maximum and didn't see that particular message anymore.

So it seems likely that "An I/O error occured while sending to the backend." does not mean "connection limit exceeded", since the latter message is given explicitly when that is the case.
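As an aside, capping the pool is straightforward; here is a minimal sketch using the commons-dbcp BasicDataSource API directly (in Tomcat the same limits go in the resource configuration; the URL, credentials, and numbers are illustrative placeholders, not our production settings):

    import org.apache.commons.dbcp.BasicDataSource;

    public class PoolConfig {
        public static BasicDataSource createPool() {
            BasicDataSource ds = new BasicDataSource();
            ds.setDriverClassName("org.postgresql.Driver");
            ds.setUrl("jdbc:postgresql://dbhost/mydb");
            ds.setUsername("user");
            ds.setPassword("secret");
            // Keep the pool maximum safely below the server's
            // max_connections, leaving headroom for everything else.
            ds.setMaxActive(20);
            // Fail fast instead of queuing forever when exhausted.
            ds.setMaxWait(10000); // milliseconds
            return ds;
        }
    }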

> A really good place to start would be to enable tracing on the JDBC
> driver.

That's a good idea indeed. I'll try to enable this and see what it does.
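If I understand the driver documentation correctly, drivers of this era take a loglevel URL parameter (1 = INFO, 2 = DEBUG) and write their trace to the DriverManager log writer; a minimal sketch, with placeholder URL and credentials:

    import java.io.PrintWriter;
    import java.sql.Connection;
    import java.sql.DriverManager;

    public class DriverTrace {
        public static void main(String[] args) throws Exception {
            // Route the driver's trace output somewhere visible.
            DriverManager.setLogWriter(new PrintWriter(System.err, true));
            // loglevel=2 requests DEBUG-level tracing from the driver.
            Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://dbhost/mydb?loglevel=2",
                    "user", "secret");
            // ... run the problematic select here and watch the trace ...
            conn.close();
        }
    }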

> If the issue is server-side, then you will also want to look at the
> PostgreSQL logs on the server; anything as serious as a backend aborting
> should write an entry in the log.

We studied the PG logs extensively but unfortunately found nothing there that points to the problem.
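One thing we can still double-check is whether the server is configured to log such events at all; a minimal sketch that reads the relevant settings from pg_settings over JDBC (connection details are placeholders):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class CheckLogging {
        public static void main(String[] args) throws Exception {
            Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://dbhost/mydb", "user", "secret");
            Statement st = conn.createStatement();
            // pg_settings exposes the live value of each server parameter.
            ResultSet rs = st.executeQuery(
                    "SELECT name, setting FROM pg_settings WHERE name IN "
                    + "('log_destination', 'log_min_messages', "
                    + "'log_min_error_statement')");
            while (rs.next()) {
                System.out.println(rs.getString("name")
                        + " = " + rs.getString("setting"));
            }
            rs.close();
            st.close();
            conn.close();
        }
    }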


