I'm using PostgreSQL 7.4.8. In January, I reported a psql bug: an INSERT issued through psql would cause psql to crash. There was no problem with other operations I tried, or with the same INSERT submitted through JDBC. The discussion thread begins here:

    http://archives.postgresql.org/pgsql-bugs/2006-01/msg00071.php

There was no resolution to the problem -- a bad psql build was suggested, and I couldn't disprove it.

The problem has occurred again, and I've found a buffer overflow in psql that explains it. Here is code from src/bin/psql/common.c, in the PrintQueryResults function:

    case PGRES_COMMAND_OK:
    {
        char    buf[10];

        success = true;
        sprintf(buf, "%u", (unsigned int) PQoidValue(results));

In 8.1.5, the sprintf has been replaced by an snprintf, which truncates instead of overflowing the buffer -- a less serious form of the bug.

I believe we end up in this code after an INSERT is processed through psql. If PQoidValue returns 1000000000 (1 billion) or higher, the value requires 10 digits, and the terminating NUL byte overflows buf.

Looking at my databases, I find that the problem occurs in exactly those databases where newly inserted rows get OIDs above 1 billion. (Because the psql crash occurs while processing results, the INSERT itself succeeds, and I can examine the OIDs of the inserted rows.) My January email indicates that we had been loading data for three months, so OIDs could conceivably have reached 1 billion (if I understand OIDs correctly). The problem occurred again at about the same point -- three months into a test.

With the 10 changed to 11, INSERTs through psql no longer cause psql to crash.

I have two questions:

1) Is one of the PostgreSQL developers willing to get this fix into the next release? (We're patching our own 7.4.8 build.)

2) If no one else has hit this, it suggests I might be in uncharted territory with OIDs getting this high. Do I need to review my vacuuming strategy? (I can summarize my vacuuming strategy for anyone interested.)

Jack Orenstein
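
P.S. For anyone who wants to see the arithmetic outside of psql, here is a minimal standalone test program of my own (not the psql source) showing why 10 bytes isn't enough: a 32-bit unsigned value needs up to 10 digits, so with the terminating NUL the buffer must hold 11 bytes. snprintf's return value shows how many bytes the old sprintf would have written.

    #include <stdio.h>

    int main(void)
    {
        /* An OID at or above 1 billion formats to 10 digits; the
           32-bit maximum, 4294967295, is also 10 digits. */
        unsigned int oid = 1000000000u;

        char buf[11];   /* 10 digits + terminating NUL */
        int n = snprintf(buf, sizeof(buf), "%u", oid);

        /* snprintf returns the length it needed, excluding the NUL;
           if n >= sizeof(buf), the original sprintf into char buf[10]
           would have written past the end of the buffer. */
        printf("needed %d bytes plus NUL; buffer holds %zu\n",
               n, sizeof(buf));
        printf("formatted OID: %s\n", buf);
        return 0;
    }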