
Re: pg_dump 8.3.3 ERROR: invalid page header in block 2264419 of relation "pg_largeobject"

On 5/24/17 4:18 PM, Tom Lane wrote:
> David Wall <d.wall@xxxxxxxxxxxx> writes:
>> We have not noted any issues, but when I ran a pg_dump on an 8.3.3
>> database, it failed after an hour or so with the error:
> 8.3.3?
Yes, it's old.

>> cat /proc/version
>> Linux version 2.6.18-92.1.10.el5.xs5.0.0.39xen (root@pondo-2) (gcc
>> version 4.1.2 20071124 (Red Hat 4.1.2-42)) #1 SMP Thu Aug 7 14:58:14 EDT
>> 2008
> Egad.  I take it this server has been entirely unmaintained for ~ 8 years.
Indeed! We are just the software vendor, hence our surprise too. They didn't even know their backups were failing due to this error.


>> ERROR:  invalid page header in block 2264419 of relation "pg_largeobject"
>> pg_dump: The command was: FETCH 1000 IN bloboid
>> As we seem to have some data corruption issue, the question is how can I
>> either fix this, or have pg_dump ignore it and continue doing the best
>> dump it can?  That is, I'd like to create a new clean database that has
>> whatever data I can recover.
> Setting zero_damaged_pages would be a brute-force answer, but bear in mind
> that that's irreversible and it's hard to predict how much you'll lose.
> If possible, I'd take a physical backup (e.g. with tar) of the entire
> $PGDATA directory, preferably with the server stopped, before you do
> that.  Then you'll at least know you can get back to where you are.
>
> I hope this is being done as part of migration to more up-to-date
> software.
Yes, this was discovered as part of a migration to all new servers. We have just put PG 9.3.4 into place and were looking to load it, but restoring their last pg_dump produced an error because it was an incomplete dump. We then went to run our own pg_dump and found that it crashed with the invalid page header.
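
For the reload itself, what I have in mind is roughly the following, run from the new machine so the 9.3 pg_dump is the one reading from the old 8.3 server (host, user and database names below are placeholders, not the real ones):

# Run from the new 9.3 server; a 9.3 pg_dump can still dump from an 8.3 server.
# -Fc (custom format) keeps the large objects and lets pg_restore work selectively.
pg_dump -h old-db-host -p 5432 -U postgres -Fc -v dbname > dbname.dump
pg_restore -U postgres -d dbname -v dbname.dump

Of course, until the bad page is dealt with on the 8.3 side, that dump will keep stopping at the same block.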

Good idea on the physical backup first, while the system is stopped.
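
Roughly what I'm planning for that step, assuming a stock data directory layout on that box (the paths, stop/start commands and database name below are my guesses, to be adjusted for their setup):

# With the old 8.3 server stopped, snapshot the whole cluster directory first.
pg_ctl -D /var/lib/pgsql/data stop -m fast
tar -czf /backup/pgdata-before-repair.tar.gz -C /var/lib/pgsql data
pg_ctl -D /var/lib/pgsql/data start

# Then retry the dump with zero_damaged_pages on for that session only.
# It is a superuser-only setting, so connect as postgres; any damaged page
# read during the dump is replaced with zeros, so those blobs come back empty.
PGOPTIONS='-c zero_damaged_pages=on' pg_dump -U postgres -Fc -v dbname > dbname.dump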

They do have a slave DB running via WAL shipping. Would that likely help us in any way? Because the DBs are big (they have two at 191GB and 127GB), it takes a fair bit of time to do backups, transfers and restores. I'm trying to find options as it likely means downtime for them that they are not expecting (yet).
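
One thing I may try first: since we know the exact block number, we can compare that page between the master's file and the slave's copy of it before zeroing anything. A rough sketch, with <dboid> and <relfilenode> as placeholders we'd have to look up first:

# On the master, find the file that holds pg_largeobject.
psql -U postgres -At -c "SELECT oid FROM pg_database WHERE datname = current_database()" dbname
psql -U postgres -At -c "SELECT relfilenode FROM pg_class WHERE relname = 'pg_largeobject'" dbname

# With 8kB pages and 1GB segments (131072 blocks each), block 2264419 is
# block 36195 of segment file .17 (2264419 = 17 * 131072 + 36195).
dd if=$PGDATA/base/<dboid>/<relfilenode>.17 bs=8192 skip=36195 count=1 2>/dev/null | md5sum

Running the same dd against the slave's copy of that file would tell us whether its page is any better. I realize the slave is only a warm standby on 8.3, so we can't just point pg_dump at it without promoting it, but this file-level check doesn't need that.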

Thanks for your help!


--
Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


