NOW Web Sites Manager <webperson@xxxxxxx> writes:
> I have a table that reports one block error when I try to select
> everything, reindex, vacuum, etc. (invalid page header in block
> 413620 of relation "tablename"). I've read on the list that the
> thing to do is use pg_dump and then restore it. However, pg_dump
> fails with the same error.

> I can't take down the whole server, let alone easily Postgresql, to
> debug this. Any ideas as to how I could do this? I wrote a Perl
> script to select the records one by one, but it didn't find a bad
> record, although maybe it's how I wrote the script (it just selected
> the records one by one).

Hm, were you selecting the records by primary key (or some other way
that would have produced an index search)? If so, the above suggests
that the corrupted block is one that happens not to contain any live
data, which would be a lucky break for you.

Personally I'd want to know what was going on before taking any
drastic measures, so I'd try to dump out the block with pg_filedump:
http://sources.redhat.com/rhdb/

However, if you're comfortable that your Perl script was able to
retrieve everything you care about from the table, you could skip
that and go directly to zeroing out the block, which is the
appropriate recovery action when you don't care about the data in it.
The usual way to do that is with dd:

	dd if=/dev/zero of=TARGETFILE bs=8k seek=413620 count=1

(Sorry, no idea how to do it if you're on Windows.)

The trick is to be sure you've got the right target file. You need
the OID of your database (which is the directory name) and the
relfilenode (NOT the OID) of the table, which you get from pg_database
and pg_class respectively. Then the file is
$PGDATA/base/DBOID/RELFILENODE.

Hm, wait a second: block 413620 would be past the 1-GB mark, well
past it in fact, so the table is split across segment files. What
you're actually going to need is

	dd if=/dev/zero of=TARGETFILE.3 bs=8k seek=20404 count=1

i.e. the filename you want is RELFILENODE.3, not just RELFILENODE.
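[Editor's note: a minimal sketch of the lookup and segment arithmetic described above. The database name "mydb" and table name "tablename" are placeholders, not from the original mail; segments are assumed to be the default 1 GB with 8 kB blocks.]

```shell
# Find the directory name (database OID) and the data file name
# (relfilenode) -- shown as comments since they need a live server:
#   psql -Atc "SELECT oid FROM pg_database WHERE datname = 'mydb'"
#   psql -d mydb -Atc "SELECT relfilenode FROM pg_class WHERE relname = 'tablename'"

# Heap files are split into 1-GB segments. With 8 kB blocks, each
# segment holds 1073741824 / 8192 = 131072 blocks, so block 413620
# falls in segment 3 at offset 20404:
SEG=$((413620 / 131072))    # which segment file (RELFILENODE.$SEG)
OFF=$((413620 % 131072))    # block offset within that segment
echo "zero block $OFF of segment file RELFILENODE.$SEG"
```

This reproduces the numbers in the second dd command: seek=20404 against the .3 segment file rather than seek=413620 against the base file.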
You might want to read the "Database Physical Storage" chapter of TFM
to be sure you follow all this before you start zapping data.

			regards, tom lane