"Donald Fraser" <postgres@xxxxxxxxxxxxxxx> writes:
> Log messages:
> <1Tmenteshvili 10709 2006-08-24 17:48:19 BST 0> ERROR: invalid message format
> < 3670 2006-08-24 17:48:19 BST > LOG: server process (PID 10709) was terminated by signal 11
> At the same time, an engineer, who thought nobody was in the building,
> was working on the network and changing a network switch.

I tried to reproduce this by using gdb to force the "invalid message
format" failure --- that is, by deliberately changing one of the values
pq_getmsgend compares. No luck: the backend issued the error and
continued just as it should.

My best guess now is that the crash was not directly related to that
error, but was caused by insufficiently robust processing of whatever
garbage was received afterwards. However, with no clue what that
garbage might have looked like, there are too many possible code paths
to chase through without any leads.

Anyone care to do some random-data stress testing, ie, connect to a
backend and squirt random data at it to see if you can make it crash
rather than just complain and disconnect? Be prepared to show a backend
stack trace and the exact data sent if you succeed.

			regards, tom lane
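For anyone wanting to try the stress test described above, here is a minimal sketch of one way to do it. The function names (`random_message`, `squirt`), host/port, seed, and message sizes are all my own assumptions, not anything from the original report; the only grounded detail is the general frontend/backend framing, where most messages carry a one-byte type code followed by a four-byte big-endian length that counts the length field itself but not the type byte.

```python
# Hypothetical fuzzing sketch: frame random payloads like protocol
# messages and send them at a backend until it disconnects.
import random
import socket
import struct

def random_message(rng: random.Random) -> bytes:
    """Build one syntactically framed but semantically random message."""
    msg_type = bytes([rng.randrange(32, 127)])        # random printable type code
    payload = bytes(rng.randrange(256) for _ in range(rng.randrange(64)))
    length = struct.pack("!I", 4 + len(payload))      # length counts itself
    return msg_type + length + payload

def squirt(host: str, port: int, n_messages: int, seed: int = 0) -> None:
    """Connect and send random messages; record the seed so a crash
    can be replayed with the exact same byte sequence."""
    rng = random.Random(seed)
    with socket.create_connection((host, port)) as sock:
        for _ in range(n_messages):
            try:
                sock.sendall(random_message(rng))
            except (BrokenPipeError, ConnectionResetError):
                break   # backend complained and hung up -- expected behavior
```

Seeding the generator matters here: if a particular run does produce a crash, the same seed regenerates the exact data sent, which is what would need to accompany a backend stack trace.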