Hi,
I have a londiste consumer database that also contains some additional user data. About 4-5 days per week, everything works fine, and the data is there too; I haven't yet been able to check whether anything is missing on the days when it goes wrong.
This is the error:
---------------------
pg_dump: [archiver (db)] query failed: ERROR: cache lookup failed for index 231808363
pg_dump: [archiver (db)] query was: SELECT t.tableoid, t.oid, t.relname AS indexname, pg_catalog.pg_get_indexdef(i.indexrelid) AS indexdef, t.relnatts AS indnkeys, i.indkey, i.indisclustered, false AS indisreplident, t.relpages, c.contype, c.conname, c.condeferrable, c.condeferred, c.tableoid AS contableoid, c.oid AS conoid, pg_catalog.pg_get_constraintdef(c.oid, false) AS condef, (SELECT spcname FROM pg_catalog.pg_tablespace s WHERE s.oid = t.reltablespace) AS tablespace, t.reloptions AS indreloptions FROM pg_catalog.pg_index i JOIN pg_catalog.pg_class t ON (t.oid = i.indexrelid) LEFT JOIN pg_catalog.pg_constraint c ON (i.indrelid = c.conrelid AND i.indexrelid = c.conindid AND c.contype IN ('p','u','x')) WHERE i.indrelid = '231800968'::pg_catalog.oid AND i.indisvalid AND i.indisready ORDER BY indexname
---------------------
The index OID is a different one every time.
The server log says the same thing.
An interesting detail: on the days with successful dumps, the last dump finishes within 1 hour 15 minutes of the job starting.
But the errors occur at very different times, sometimes 14 hours after the job started (or maybe that run started on a different day).
And Munin doesn't show any clear load peaks on the server.
And this is the script:
---------------------
#!/bin/bash
HOST=localhost
PGPORT=5432
BACKUPDIR=/data/dump/afolder
DATABASE=adatabase
SCHEMAS=$(psql -p "$PGPORT" "$DATABASE" -At --command "select schema_name from information_schema.schemata where schema_name LIKE 'oz_%'")
for SCHEMA in $SCHEMAS
do
    # mv "$BACKUPDIR/$SCHEMA.backup.1" "$BACKUPDIR/$SCHEMA.backup.2"
    mv "$BACKUPDIR/$SCHEMA.backup" "$BACKUPDIR/$SCHEMA.backup.1"
    pg_dump -Fc -Z3 -p "$PGPORT" -n "$SCHEMA" "$DATABASE" > "$BACKUPDIR/$SCHEMA.backup"
done
---------------------
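Not a fix for the root cause, but since the failures look transient, I've been considering a retry wrapper so a single failed run doesn't destroy the previous backup. A sketch only (my own idea, not tested against the real setup); `dump_with_retry` is a made-up helper name, and PGPORT/DATABASE are assumed to be set as in the script above:

```shell
#!/bin/bash
# Sketch: retry pg_dump a few times, and only replace the previous
# backup once a dump has actually succeeded.
dump_with_retry() {
    local schema=$1 out=$2 tries=${3:-3}
    local attempt
    for attempt in $(seq 1 "$tries"); do
        if pg_dump -Fc -Z3 -p "$PGPORT" -n "$schema" "$DATABASE" > "$out.tmp"; then
            mv "$out.tmp" "$out"   # only overwrite the old backup on success
            return 0
        fi
        echo "pg_dump of $schema failed (attempt $attempt/$tries)" >&2
        [ "$attempt" -lt "$tries" ] && sleep 60   # pause before retrying
    done
    rm -f "$out.tmp"   # don't leave a broken partial dump behind
    return 1
}
```

In the loop above, the `mv`/`pg_dump` pair would then become `dump_with_retry "$SCHEMA" "$BACKUPDIR/$SCHEMA.backup"`, and the rotation to `.backup.1` could happen only after it returns success.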
It seems as if an index gets dropped after pg_dump has put it on its to-do list, so by the time pg_dump tries to dump it, it's gone.
But I thought that should not be possible, since pg_dump does its work inside a single transaction.
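To test that suspicion, I could make the server log concurrent DDL, so any DROP/CREATE INDEX (e.g. from pgq/londiste maintenance) would show up in the log next to the dump's timestamps. A minimal postgresql.conf fragment (assuming I can reload the server configuration):

```
# postgresql.conf fragment: log all DDL statements
log_statement = 'ddl'
log_line_prefix = '%t [%p] '   # timestamp + backend pid for correlation
```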
Does anyone know what's up?
--
Willy-Bas Loos