On Wednesday 18 February 2009, Scott Marlowe wrote:
> Look into the zero_damaged_pages setting.

Thanks for the hint Scott, I tried it and did:

  select * into mb from dbmail_messageblks;
  WARNING: invalid page header in block 973075 of relation "pg_toast_1281127"; zeroing out page
  ERROR: missing chunk number 0 for toast value 1623370 in pg_toast_1281127

The missing chunks still stop the query. I found out which messageblks are destroyed, and can get all the rest out.

One interesting finding: since the data in that table is about 10 GB and therefore time-consuming to read, I tried reading only a small part of the big "messageblk" field (type "bytea", the longest field). So I read every row with substr(messageblk, 1, 5) and on success marked that block as good. But rows that passed this test could still produce errors later, meaning you must really test-read the full messageblk field. Just in case someone else needs to perform such a test; a sketch of a full-read test follows below.

Also a question: because I must read all the data, the psql client runs out of memory trying to cache the whole 10 GB from that table. I worked around this by selecting the table piece by piece (second sketch below). Is there a smart way to do such a select without caching the results in memory? Is that what temporary tables and "select into" are made for? I just want to know the recommended way of doing huge queries.
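For the archives, here is a minimal sketch of such a full-read test in PL/pgSQL. I assume the stock DBMail primary key messageblk_idnr and a scratch table bad_blocks; check this against your own schema. The concatenation is there to force a full detoast, because a plain octet_length() can be answered from the stored size header without reading every toast chunk:

  create table bad_blocks (messageblk_idnr bigint);

  create or replace function find_bad_messageblks() returns void as $$
  declare
      r record;
      dummy integer;
  begin
      -- the outer loop touches only the id column, which is not
      -- toasted, so it cannot itself trip over broken toast chunks
      for r in select messageblk_idnr from dbmail_messageblks loop
          begin
              -- concatenating forces the whole bytea to be detoasted;
              -- octet_length() alone would not read all the chunks
              select octet_length(messageblk || ''::bytea) into dummy
                from dbmail_messageblks
               where messageblk_idnr = r.messageblk_idnr;
          exception when others then
              -- any read error marks this row as unrecoverable
              insert into bad_blocks values (r.messageblk_idnr);
          end;
      end loop;
  end;
  $$ language plpgsql;

  select find_bad_messageblks();

Afterwards bad_blocks holds the ids of all rows whose toast data cannot be read back.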
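And this is the kind of piecewise reading I mean, again assuming the messageblk_idnr key and the bad_blocks table from above; each statement materializes only one slice, so psql never has to buffer the whole 10 GB at once:

  -- the first slice creates the rescue table
  select * into mb
    from dbmail_messageblks
   where messageblk_idnr between 1 and 100000
     and messageblk_idnr not in (select messageblk_idnr from bad_blocks);

  -- further slices are appended; repeat with the next id ranges
  insert into mb
      select * from dbmail_messageblks
       where messageblk_idnr between 100001 and 200000
         and messageblk_idnr not in (select messageblk_idnr from bad_blocks);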
Kind regards,
zmi
-- 
// Michael Monnerie, Ing.BSc ----- http://it-management.at
// Tel: 0660 / 415 65 31 .network.your.ideas.
// PGP Key: "curl -s http://zmi.at/zmi.asc | gpg --import"
// Fingerprint: AC19 F9D5 36ED CD8A EF38 500E CE14 91F7 1C12 09B4
// Keyserver: wwwkeys.eu.pgp.net Key-ID: 1C1209B4