Edmundo Robles <edmundo@xxxxxxxxxxxx> writes:
> To verify the integrity of a backup I do:
>     gunzip -c backup_yesterday.gz | pg_restore -d my_database && echo "backup_yesterday is OK"
> but my_database's uncompressed size is more than 15 GB, and sometimes I
> have no space to restore it, so I always have to declutter my disk first.
> It would be great to have a dry-run option: verification would take much
> less time and would save disk space, because it would just execute the
> commands without writing anything to disk.

What do you imagine a dry-run option would do?

If you just want to see whether the file contains obvious corruption,
you could do

    pg_restore file >/dev/null

and see if it prints any complaints on stderr.

If you want to have confidence that the file would actually restore (and
that there aren't, e.g., unique-index violations or foreign-key
violations in the data), I could imagine a mode where pg_restore wraps
its output in "begin" and "rollback".  But that's not going to save any
disk space, or time, compared to doing a normal restore into a scratch
database.

I can't think of any intermediate levels of verification that wouldn't
involve a huge amount of work to implement ... and they'd be unlikely to
catch interesting problems in practice.  For instance, I doubt that
syntax-checking, but not executing, the SQL coming out of pg_restore
would be worth the trouble.  If an archive is corrupt enough that it
contains bad SQL, it probably has problems that pg_restore would notice
anyway.  Most of the restore failures we hear about in practice would
not be detectable without actually executing the commands, because they
involve problems like issuing commands in the wrong order.

			regards, tom lane
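
A minimal sketch of the two checks described above, assuming
backup_yesterday.gz is a gzip-compressed custom-format archive (the
scratch database name is invented for illustration):

    # Quick check: pg_restore reads a custom-format archive from stdin
    # and, with no -d option, writes the restore script to stdout.
    # Obvious archive corruption shows up as complaints on stderr.
    gunzip -c backup_yesterday.gz | pg_restore >/dev/null \
        && echo "archive structure looks OK"

    # Full-confidence check: restore into a throwaway database, then
    # drop it.  This needs the full ~15 GB of disk space, as noted above.
    createdb scratch_restore
    gunzip -c backup_yesterday.gz | pg_restore -d scratch_restore \
        && echo "backup_yesterday restores cleanly"
    dropdb scratch_restore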