Yes, I am trying a full vacuum (vacuumdb -f).
I think I found the problem. There were more than 10 million records that my script was trying to delete from a table at night, while other processes were adding new entries to the same table.
The delete could not finish within the day, and the script tried to delete from the table again the next night via crontab.
Could that have caused the deadlock?
SQL query: DELETE FROM mytable WHERE createtime < NOW() - INTERVAL '10 days';
We added a 1K LIMIT to the delete query inside a loop. Now I can delete all the entries, albeit slowly:
DELETE FROM mytable WHERE id = any (array(SELECT id FROM mytable WHERE createtime < NOW() - INTERVAL '10 day' LIMIT 1000))
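For reference, the loop looks roughly like this. This is a minimal runnable sketch that uses Python with sqlite3 as a stand-in for PostgreSQL (our real script runs the query above through psql); the table and column names (mytable, id, createtime) match the query, everything else is illustrative:

```python
import sqlite3

# In-memory stand-in database; createtime is simplified to an integer
# where 0 = "older than 10 days" and 1 = "recent".
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable (id INTEGER PRIMARY KEY, createtime INTEGER)")
conn.executemany("INSERT INTO mytable (createtime) VALUES (?)",
                 [(0,)] * 5000 + [(1,)] * 100)  # 5000 old rows, 100 recent

BATCH = 1000
deleted_total = 0
while True:
    # Delete at most BATCH old rows per round, mirroring the
    # "id = any(array(SELECT id ... LIMIT 1000))" query above.
    cur = conn.execute(
        "DELETE FROM mytable WHERE id IN "
        "(SELECT id FROM mytable WHERE createtime < 1 LIMIT ?)",
        (BATCH,))
    conn.commit()
    if cur.rowcount == 0:   # nothing old left to delete
        break
    deleted_total += cur.rowcount

print(deleted_total)  # 5000: all old rows removed, recent rows untouched
```

Committing after each small batch keeps each transaction short, so concurrent inserters are never blocked for long.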
I am planning to partition this large table by day; then I can simply drop the old partitions after a certain number of days.
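A rough sketch of what I have in mind, using inheritance-based partitioning with CHECK constraints (the table names and dates are illustrative, not from our actual schema):

```sql
-- Parent table stays empty; each day gets its own child table.
CREATE TABLE mytable_2010_02_06 (
    CHECK (createtime >= '2010-02-06' AND createtime < '2010-02-07')
) INHERITS (mytable);

-- Expiring a day then becomes a cheap metadata operation
-- instead of a multi-million-row DELETE:
DROP TABLE mytable_2010_01_27;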
Do you have any other suggestion?
Thanks.
On Sat, Feb 6, 2010 at 8:11 AM, Tom Lane <tgl@xxxxxxxxxxxxx> wrote:
Ibrahim Harrani <ibrahim.harrani@xxxxxxxxx> writes:
> I got a deadlock while vacuuming all databases with vacuumdb command.
> vacuumdb: vacuuming of database "mydb" failed: ERROR: deadlock detected
> DETAIL: Process 1294 waits for AccessExclusiveLock on relation 2662 of
> database ; blocked by process 1807.
> Process 1807 waits for AccessShareLock on relation 1259 of database 16389;
> blocked by process 1294.

vacuumdb -f you mean? An ordinary vacuum wouldn't be trying to take
AccessExclusiveLock.
It might be that you have an instance of a failure that was identified
just a couple weeks ago:
http://archives.postgresql.org/pgsql-committers/2010-01/msg00199.php
regards, tom lane