We had not noticed any issues, but when I ran pg_dump on an 8.3.3
database, it failed after an hour or so with the error:
ERROR: invalid page header in block 2264419 of relation "pg_largeobject"
pg_dump: The command was: FETCH 1000 IN bloboid
Since we seem to have a data corruption issue, the question is: how can I
either fix this, or have pg_dump skip past the bad block and produce the
best dump it can? That is, I'd like to create a new, clean database
containing whatever data I can recover.
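In case it helps to show what I mean by a best-effort dump, here is the
rough sort of thing I was imagining for the large objects. The table,
column, database, and path names below are just placeholders for however
we track the uploaded files' OIDs, and I'm assuming a failed export is
reflected in psql's exit status:

#!/bin/sh
# Rough sketch only: "uploads", "file_oid", DB and OUTDIR are placeholders.
DB=mydb
OUTDIR=/tmp/lo_recovered
mkdir -p "$OUTDIR"

# List the large object OIDs from our own application table, since reading
# pg_largeobject itself for the list would presumably hit the same bad page.
psql -At -c "SELECT file_oid FROM uploads" "$DB" > /tmp/loids.txt

# Export each object on its own, so one corrupt object only loses itself
# instead of aborting everything the way pg_dump's FETCH loop does.
while read oid; do
    psql -c "\lo_export $oid $OUTDIR/$oid" "$DB" \
        || echo "skipped large object $oid" >&2   # assuming nonzero exit on failure
done < /tmp/loids.txt

The ordinary table data I would hope to get with a normal pg_dump run once
the blobs are dealt with. Does something along those lines seem reasonable,
or is there a better approach?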
The large objects are mostly used to store uploaded files (which are
encrypted, so their raw contents are meaningless on their own), so if we
lose a few it's not too bad; certainly no worse than the state we are in
now.
Thanks,
David
The OS it is running on shows:
cat /proc/version
Linux version 2.6.18-92.1.10.el5.xs5.0.0.39xen (root@pondo-2) (gcc
version 4.1.2 20071124 (Red Hat 4.1.2-42)) #1 SMP Thu Aug 7 14:58:14 EDT
2008
uname -a
Linux example.com 2.6.18-92.1.10.el5.xs5.0.0.39xen #1 SMP Thu Aug 7
14:58:14 EDT 2008 i686 i686 i386 GNU/Linux