Re: Corrupted data, best course of repair?

Sean Chittenden <sean@xxxxxxxxxx> writes:
> Today I was charged with fixing a DB, and upon looking into it, I
> discovered that there were gigs upon gigs of the errors shown below.
> Has anyone seen this or, more interesting to me, recovered from this
> kind of failure mode?

> Here's a bit of background.  The system is a good system with verified
> disks/FS, etc., but it's a busy database and (here's what's bugging
> me): it's using pg_autovacuum *AND* gets a power cycle every few
> weeks[1].  I'm worried that a non-exclusive vacuum during recovery is
> causing some kind of problem.  The system ran fine under 7.4.8 for
> months, but after an upgrade to 8 they've been experiencing some
> "interesting characteristics," and they deal with the system getting
> wedged by "readjusting a power cord or toggling a power switch"[1].
> Any helpful advice?  reindex?  I still have the original data files,
> but they're sizable (~30GB) and I haven't had a chance to really
> experiment.  A dump/reload has been done, and data is sporadically
> missing (less than 0.01%) or, in some cases, duplicated.  I'm guessing
> this is due to data that was updated after some kind of index or page
> corruption occurred during the previous "reboot."  fsync was off.

They run with fsync off AND they like to toggle the power switch at
random?  I'd suggest finding other employment --- they couldn't possibly
be paying you enough to justify cleaning up after stupidity as gross as
that.
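
At the very least, get fsync turned back on before that machine eats
another power cycle.  A minimal sketch, assuming a stock setup (edit
postgresql.conf in the data directory, then reload or restart the
postmaster):

    fsync = on          # in postgresql.conf; never off on flaky power

With fsync off, the server can tell clients a transaction is committed
before its WAL is actually on disk, so a power cut loses WAL exactly
as described below.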

Anyway, the errors appear to indicate that there are pages in the
database with LSN (last WAL location) larger than the actual current end
of WAL.  The difference is pretty large though --- at least 85MB of WAL
seems to have gone missing.  My first thought was a corrupted LSN field.
But seeing that there are at least two such pages, and given the antics
you describe above, what seems more likely is that the LSNs were correct
when written.  I think some page of WAL never made it to disk during a
period of heavy updates that was terminated by a power cycle, and during
replay we stopped at the first point where the WAL data was detectably
corrupt, and so a big chunk of WAL never got replayed.  Which of course
means there's probably a lot of stuff that needs to be fixed and did not
get fixed, but in particular our idea of the current end-of-WAL address
is a lot less than it should be.  If you have the server log from just
after the last postmaster restart, looking at what terminated the replay
might confirm this.
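
As a cross-check (a sketch; the exact field names may differ a bit
across versions), compare the flush-request LSNs in the error messages
against what the control file thinks the checkpoint position is:

    pg_controldata /path/to/PGDATA | grep 'checkpoint location'

If the page LSNs in the errors sit far past the latest checkpoint
location, that's consistent with a big chunk of WAL having vanished.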

You could get the DB to stop complaining by doing a pg_resetxlog to push
the WAL start address above the largest "flush request" mentioned in any
of the messages.  But my guess is that you'll find a lot of internal
corruption after you do it.  Going back to the dump might be a saner way
to proceed.
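
Roughly like this (a sketch only; check the pg_resetxlog reference page
for your release, since the -l argument format has varied, and take a
filesystem-level copy of the data directory before touching anything):

    pg_resetxlog -l <timelineid>,<fileid>,<seg> /path/to/PGDATA

Pick the target comfortably past the largest flush request shown in the
errors.  Bear in mind that pg_resetxlog makes the server start; it does
not make the data correct.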

			regards, tom lane

