On 10.12.2014 17:07, Gabriel Sánchez Martínez wrote:
> Hi all,
>
> I am running PostgreSQL 9.3.5 on Ubuntu Server 14.04 64 bit with 64 GB
> of RAM. When running pg_dump on a specific table, I get the following
> error:
>
> pg_dump: Dumping the contents of table "x_20131111" failed:
> PQgetResult() failed.
> pg_dump: Error message from server: ERROR: invalid memory alloc request
> size 18446744073709551613
> pg_dump: The command was: COPY public.x_20131111 (...) TO stdout;
> pg_dump: [parallel archiver] a worker process died unexpectedly
>
> If I run a COPY TO file from psql I get the same error.
>
> Is this an indication of corrupted data? What steps should I take?

In my experience, issues like this are caused by a corrupted varlena
header (i.e. corruption in text/varchar/... columns). How exactly that
corruption happened is difficult to say - it might be faulty hardware
(RAM, controller, storage), it might be a bug (e.g. a piece of memory
getting overwritten by random data), or it might be a consequence of an
incorrect hardware configuration (e.g. leaving the on-disk write cache
enabled).

If you have a backup of the data, use that instead of recovering the
data from the current database - it's faster and safer. However, it
might be worth spending some time analyzing the corruption to identify
the cause, so that you can prevent it next time.

There are tools that might help you with that - the "pageinspect"
extension is a way to look at the data files at a low level. It may be
quite tedious, though, and it may not work with badly broken data.

Another option is "pg_check" - an extension I wrote a few years back.
It analyzes the data files and prints info on all corruption
occurrences. It's available at https://github.com/tvondra/pg_check and
I just pushed some minor fixes to make it 9.3-compatible.

regards
Tomas
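
PS: If you want to start poking at the table with pageinspect, a minimal
sketch could look like this (block 0 and the table name x_20131111 are
just taken from the report above - in practice you would iterate over all
blocks of the table to find the damaged one):

    CREATE EXTENSION pageinspect;

    -- how many blocks the table has
    SELECT pg_relation_size('x_20131111') /
           current_setting('block_size')::int AS nblocks;

    -- raw page header of block 0
    SELECT * FROM page_header(get_raw_page('x_20131111', 0));

    -- line pointers and tuple headers on that page; implausible
    -- lp_len / t_hoff values often point at the corrupted tuple
    SELECT lp, lp_off, lp_len, t_xmin, t_xmax, t_ctid, t_hoff
      FROM heap_page_items(get_raw_page('x_20131111', 0));

heap_page_items only decodes the tuple headers (it does not detoast the
values), so it usually works even on pages where COPY fails.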