Re: Does PostgreSQL check database integrity at startup?

On Thu, Dec 28, 2017 at 1:26 PM, Stephen Frost <sfrost@xxxxxxxxxxx> wrote:
Greetings Brent,

* Brent Wood (pcreso@xxxxxxxxx) wrote:
> A tool to calculate a checksum of sorts based on the table (file) content would provide better assurance of duplication than simply checking file size; for example, differently vacuumed tables in each copy could have the same content but different file sizes.

PG has support for checksums and there are tools out there to validate
that the checksum is correct for all pages which have one, but that
wouldn't help in this case because the file is zeroed out (and a zeroed-out
file is actually a valid file in a PG data directory).
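
For illustration, a minimal sketch of checking whether data checksums are enabled and verifying them offline. The connection string and data directory path are hypothetical, and the pg_checksums tool ships with PostgreSQL 12 and later (PostgreSQL 11 called it pg_verify_checksums):

    # Hedged sketch: is the cluster built with data checksums, and do the pages verify?
    import subprocess
    import psycopg2  # assumed available

    conn = psycopg2.connect("dbname=postgres")  # hypothetical connection string
    with conn.cursor() as cur:
        cur.execute("SHOW data_checksums")
        print("data_checksums:", cur.fetchone()[0])  # 'on' or 'off'
    conn.close()

    # pg_checksums (PostgreSQL 12+) verifies the per-page checksums; the cluster
    # must be cleanly shut down before running it.
    subprocess.run(
        ["pg_checksums", "--check", "--pgdata", "/var/lib/postgresql/data"],  # path is hypothetical
        check=True,
    )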

Also, the files on the primary and the replica can actually differ when
compared with a complete-file checksum, due to hint bits being set
differently (at least; there may be other causes too).  That doesn't make
them invalid or incorrect though.

In addition to what Stephen and everyone else said, it is impossible to get a valid snapshot of a whole "file" on a running server without locking the relation and reading it through the PostgreSQL buffer cache. On data files such as heap and index files, PostgreSQL does extensive write caching; preventing data loss from that write caching is a primary purpose of WAL. Write caching in the application (PostgreSQL in this case) prevents the OS from knowing the correct "logical" state of the file at any given point in time. This means that even an LVM snapshot will not give you consistent data files of a running server, because changes that have not yet been written (still in shared buffers, waiting for a checkpoint to force them into the OS buffers) will not be visible from outside PostgreSQL.
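
To make that concrete, here is a small sketch (table name and connection string are hypothetical) contrasting the on-disk file, which cannot be copied consistently while the server runs, with an ordinary query, which sees a consistent MVCC snapshot because it goes through the buffer cache:

    # Hedged sketch: file-level access vs. reading through the server.
    import psycopg2  # assumed available

    conn = psycopg2.connect("dbname=mydb")  # hypothetical connection string
    with conn.cursor() as cur:
        # The relation's data file, relative to the data directory. Copying this
        # file on a running server is not a consistent snapshot: dirty pages may
        # still be sitting in shared_buffers, invisible to the OS.
        cur.execute("SELECT pg_relation_filepath('my_table')")
        print("data file:", cur.fetchone()[0])

        # An ordinary query goes through the buffer cache and therefore sees a
        # consistent snapshot of the table, regardless of what has been flushed.
        cur.execute("SELECT count(*) FROM my_table")
        print("rows visible to this snapshot:", cur.fetchone()[0])
    conn.close()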


Regards, Jan



Such a check to verify that the files backed up during a backup have the
same checksum as the files being restored from that backup can be done,
and that *is* done in at least some of the PG backup tools already
(pgBackRest has an independent manifest that it stores for each backup,
which contains the checksum of each file as backed up, and it verifies
that checksum when performing a restore to make sure the backed-up
file wasn't corrupted in place; other tools hopefully have something similar).
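
As an illustration of the idea (not pgBackRest's actual manifest format), here is a small sketch that records a SHA-256 digest for every file at backup time and re-checks those digests before a restore:

    # Hedged sketch: per-file checksum manifest for a backup directory.
    import hashlib
    import os

    def file_sha256(path, bufsize=1 << 20):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            while chunk := f.read(bufsize):
                h.update(chunk)
        return h.hexdigest()

    def build_manifest(backup_dir):
        # Map each file (relative path) to its digest at backup time.
        return {
            os.path.relpath(os.path.join(root, name), backup_dir):
                file_sha256(os.path.join(root, name))
            for root, _dirs, files in os.walk(backup_dir)
            for name in files
        }

    def verify_manifest(backup_dir, manifest):
        # Before restoring, make sure nothing was corrupted in place.
        bad = [rel for rel, digest in manifest.items()
               if file_sha256(os.path.join(backup_dir, rel)) != digest]
        if bad:
            raise RuntimeError("backup files changed since they were taken: %s" % bad)

    # Usage (paths hypothetical): build the manifest when the backup is taken,
    # store it alongside the backup, and call verify_manifest() before restoring.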

> I do wonder, though: given that the order of records in a table (file) is not necessarily identical (or is it?), even this may be problematic. Perhaps a checksum based on the result of a query output ordered by primary key could work?

The order of records in a *file* should be the same in the heap on the
primary as they are on the replica, but that doesn't mean the contents
of those files will be exactly the same (as mentioned above, hint bits
can differ).  We used to have cases where the indexes could also be
different, but I believe that was changed so they should match.

I've used the approach of doing a checksum across the results of an
ordered query to compare between systems and that generally does work,
but it's a bit tricky if you're trying to compare a table that's heavily
modified: you need to determine the point in the WAL stream that you're
at on the primary when you run the query, replay the replica to that
point in the WAL, and then run the query on the replica; otherwise you
could end up with differences that are simply due to the ongoing
transactions being run to update the table.
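
A rough sketch of that approach, with hypothetical connection strings and table/key names; pg_current_wal_lsn() and pg_last_wal_replay_lsn() are the PostgreSQL 10+ names (older releases used pg_current_xlog_location() and pg_last_xlog_replay_location()):

    # Hedged sketch: compare a table between primary and replica by checksumming
    # an ordered query result once the replica has replayed far enough.
    import time
    import psycopg2  # assumed available

    def table_digest(conn, table, key):
        with conn.cursor() as cur:
            # md5 over the row text, ordered by the key, gives a content-level
            # checksum that ignores physical layout and hint bits.
            cur.execute(
                f"SELECT md5(string_agg(t::text, ',' ORDER BY {key})) FROM {table} t"
            )
            return cur.fetchone()[0]

    primary = psycopg2.connect("host=primary dbname=mydb")   # hypothetical
    replica = psycopg2.connect("host=replica dbname=mydb")   # hypothetical

    # Note where the primary is in the WAL stream, then take its digest. On a
    # heavily modified table this is only approximate, as discussed above.
    with primary.cursor() as cur:
        cur.execute("SELECT pg_current_wal_lsn()")
        target_lsn = cur.fetchone()[0]
    primary_digest = table_digest(primary, "my_table", "id")

    # Wait for the replica to replay at least to that WAL position, then compare.
    with replica.cursor() as cur:
        while True:
            cur.execute("SELECT pg_last_wal_replay_lsn() >= %s::pg_lsn", (target_lsn,))
            if cur.fetchone()[0]:
                break
            time.sleep(0.5)

    print("match" if table_digest(replica, "my_table", "id") == primary_digest else "differ")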

Thanks!

Stephen



--
Jan Wieck
Senior Postgres Architect
