I am not sure how big your table is, but one way we handled this here was to select the clean rows and output them to a CSV file. The affected rows we had to load from a backup; luckily we had a clean one.
Ex: assume you have rows 1, 2, 3, 4, 5 ... 100 and the corruption is between rows 60 and 70. I exported the clean rows 1-59 and 71-100 to a CSV file and loaded them into a new table; the corrupted range was loaded back from the backup. This is just one way of doing it. There may be others that the experts here can explain better, and I am interested to see their answers as well.
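Roughly, the steps looked something like the sketch below. The table name, key column, and file names are just placeholders, not our real schema, and copying straight from a SELECT needs 8.2 or newer, so on older servers you can stage the clean rows in a scratch table first:

-- Placeholder table "mytable" with an integer key "id"; the corrupted rows are 60-70.
-- 1. Stage the readable rows in a scratch table, skipping the corrupted range.
CREATE TABLE mytable_clean AS
    SELECT * FROM mytable
    WHERE id BETWEEN 1 AND 59 OR id BETWEEN 71 AND 100;

-- 2. Dump the clean rows to CSV from psql.
\copy mytable_clean TO 'mytable_clean.csv' WITH CSV

-- 3. Rebuild the table and load the clean rows back in
--    (LIKE copies only the column definitions and defaults; indexes and
--    constraints have to be recreated separately).
CREATE TABLE mytable_new (LIKE mytable INCLUDING DEFAULTS);
\copy mytable_new FROM 'mytable_clean.csv' WITH CSV

-- 4. Rows 60-70 then have to come from the clean backup, e.g. restored into a
--    separate database and copied across the same way.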
My way is time consuming, and if a very large table, or several tables, are affected, it's a nightmare to fix them.
Good luck with your recovery. Thanks, Deepak
Deepak, Tom, thanks for answering. Tom, we are on PostgreSQL 8.1.18, so you are right that this weird message is due to the old version. I will check with my colleague about the possible causes. What can I do if there is a messed-up table?
Regards, Deniz
On Sat, Jul 30, 2011 at 11:45 PM, Tom Lane <tgl@xxxxxxxxxxxxx> wrote:
Deniz Atak <denizatak@xxxxxxxxx> writes:
> I am using PostgreSQL on a Glassfish server and I have EJB 3.0 for ORM. I am
> trying to run a query in psql but am receiving the following error:
> Local Exception Stack:
> Exception [EclipseLink-4002] (Eclipse Persistence Services -
> 2.0.0.v20091031-r5713): org.eclipse.persistence.exceptions.DatabaseException
> Internal Exception: org.postgresql.util.PSQLException: ERROR: could not read
> block 4707 of relation 1663/16384/16564: Success
What Postgres server version is that?
If it's 8.2 or older, this probably indicates a partial block at the end
of the file. Newer versions produce a more sensible error message for
the case, but that's just cosmetic --- the real problem is a messed-up
table. Have you had a filesystem corruption or an out-of-disk-space
condition on this machine?
regards, tom lane