Thanks for the info, Richard.
I didn't think it was a Slony issue per se, but that a note should be added to the Slony docs warning users to recycle connections after making substantive changes to the schema.
You're right, we use both (Java) prepared statements and pl/pgsql functions.
The data loss aspect is not so clear-cut (for us). We definitely got records that failed to insert (missing sequence numbers in tables) while others inserted correctly into the same tables, via the same pl/pgsql functions. So we can't figure out what the pattern is.
We do have our connection pool recycle idle connections, though - so it could be that things worked when the web app happened to get a freshly recycled connection, and failed when it got one of the "old/original" connections. This is probably what masked the errors for us...
That's almost certainly the case. I'm lucky in that I've always been able to have some downtime for schema changes.
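
For what it's worth, the failure mode described above can be reproduced in a few lines (hypothetical table and function names; this assumes the session-lifetime plan caching discussed in this thread, where pl/pgsql plans are bound to table OIDs):

    CREATE TABLE event_log (id serial PRIMARY KEY, msg text);

    CREATE OR REPLACE FUNCTION log_event(p_msg text) RETURNS void AS $$
    BEGIN
        -- the plan for this INSERT is cached on the first call, bound to
        -- the table's OID rather than its name
        INSERT INTO event_log (msg) VALUES (p_msg);
    END;
    $$ LANGUAGE plpgsql;

    SELECT log_event('first call');   -- works, and caches the plan

    DROP TABLE event_log;
    CREATE TABLE event_log (id serial PRIMARY KEY, msg text);

    SELECT log_event('second call');  -- fails in this session with an error
                                      -- like "relation with OID ... does not
                                      -- exist"; a fresh connection succeeds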
It would be great if postgres could "recompile" pl/pgsql functions whenever it found a missing object referenced within the function - chances are that it would recompile cleanly (in this case) and could then be executed without error.
something along the lines of:

    execute function
        OK                  -> return result
        ERROR (missing OID) -> recompile function, and re-execute
            OK                  -> return result
            ERROR (missing OID) -> report error
This would help get rid of the temp-tables-in-functions workaround of having to wrap every statement in an EXECUTE.
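
For comparison, the workaround mentioned above looks something like this (same hypothetical function as before) - EXECUTE plans the statement at run time, looking the table up by name rather than by cached OID, so it survives the drop/recreate at the cost of re-planning on every call:

    CREATE OR REPLACE FUNCTION log_event(p_msg text) RETURNS void AS $$
    BEGIN
        -- dynamic SQL: planned fresh on every call, so no stale OID
        EXECUTE 'INSERT INTO event_log (msg) VALUES ('
                || quote_literal(p_msg) || ')';
    END;
    $$ LANGUAGE plpgsql;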
Hmm - not sure you could do this without a savepoint to catch the error. However, it might be possible to track dependencies with the function (as with views). Then you'd have to issue a CASCADE to alter the table.
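
As a sketch of the savepoint half of that: a BEGIN ... EXCEPTION block in pl/pgsql sets up an implicit savepoint, so a caller can already trap the failure and react, even though the automatic recompile itself would need server support. Hypothetical wrapper; WHEN OTHERS is used because the exact SQLSTATE of the stale-plan error isn't pinned down here:

    CREATE OR REPLACE FUNCTION log_event_safe(p_msg text) RETURNS boolean AS $$
    BEGIN
        PERFORM log_event(p_msg);
        RETURN true;
    EXCEPTION WHEN OTHERS THEN
        -- the implicit savepoint rolls back the failed attempt, so the
        -- surrounding transaction survives; the application can then
        -- recycle the connection and retry
        RAISE NOTICE 'log_event() failed; recycle this connection';
        RETURN false;
    END;
    $$ LANGUAGE plpgsql;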
--
  Richard Huxton
  Archonet Ltd