
Re: delete is getting hung when there is a huge data in table


 



> still this delete operation is not working and not a single row has been deleted from the table.

This is an intended effect of ACID compliance: a transaction is guaranteed to show all of its effects atomically upon completion, or none at all. The same applies to multiple data-changing statements wrapped in a single transaction (http://en.wikipedia.org/wiki/ACID). Postgres fully implements this through multiversion concurrency control: http://en.wikipedia.org/wiki/Multiversion_concurrency_control
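As a sketch of what that means in practice (two psql sessions against your table; the WHERE clause is illustrative):

```sql
-- Session A: the delete runs inside a single transaction.
BEGIN;
DELETE FROM audittraillogentry WHERE intime <= '2015-01-30';

-- Session B, meanwhile: MVCC keeps the old row versions visible,
-- so this still reports the full row count until session A commits.
SELECT count(*) FROM audittraillogentry;

-- Session A: only now do the deletions become visible to other
-- sessions, atomically. A ROLLBACK instead would leave the table
-- exactly as it was.
COMMIT;
```

So "not a single row has been deleted" is exactly what an observer is supposed to see while the transaction is still in flight.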



As to why it takes forever, you're not providing enough information for a solid hypothesis.

Speculation (8 million rows is not that big):

- very slow and database-unfriendly storage stack (RAID 5 perhaps?)
- missing indexes to match your clauses (on modern hardware a full table scan over 8 million rows should complete in the order of minutes; try running a SELECT with the same clauses)
- other processes holding exclusive locks on the table or on some of the rows you're trying to delete? pg_locks and pg_stat_activity are your friends here
- triggers on the table that could be slow or hitting locks in other tables?
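The lock check can be sketched like this (column names as in the pg_locks and pg_stat_activity system views; the table name is yours):

```sql
-- Which backends hold or are waiting for locks on the table,
-- and what are they currently running?
SELECT l.pid, l.mode, l.granted, a.state, a.query
FROM pg_locks l
JOIN pg_stat_activity a ON a.pid = l.pid
WHERE l.relation = 'audittraillogentry'::regclass;
```

Rows with granted = false are waiters; the granted = true rows for the same relation show who is blocking them.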


Regardless, you might want to split your statement into smaller chunks through less greedy clauses and see what happens.
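One common way to chunk it (a sketch only; the batch size is arbitrary, and note the format pattern here actually matches the timestamp literal, unlike the one in your original statement):

```sql
-- Delete at most 10000 matching rows per statement; rerun until it
-- reports DELETE 0. Run as separate top-level statements, each batch
-- commits on its own, so progress survives interruption and each
-- transaction stays small.
DELETE FROM audittraillogentry
WHERE ctid IN (
    SELECT ctid FROM audittraillogentry
    WHERE intime  <= to_timestamp('2015-01-30 23:59:59.999', 'YYYY-MM-DD HH24:MI:SS.MS')
       OR outtime <= to_timestamp('2015-01-30 23:59:59.999', 'YYYY-MM-DD HH24:MI:SS.MS')
    LIMIT 10000
);
```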


Regards


F





On 03/05/15 15:24, Mitu Verma wrote:
Hi,

I am facing an issue with the deletion of huge data.
We have a cron script which is used to delete the last 3 months of data from one of the tables.
The table is large (8872597 rows, as you can see from the count below) since it holds the last 3 months.

fm_db_Server3=# select count(*) from audittraillogentry ;


   count
---------
8872597
(1 row)

Now the issue is that when this deletion script is launched, it runs for more than 7 days and does nothing, i.e. not a single row is deleted.

Then we stopped the script, terminated the database sessions using SELECT pg_terminate_backend(proc pid) and ran the following command

delete from audittraillogentry where intime <= to_timestamp('2015-01-30 23:59:59.999', 'YYYY/MM/DD-HH24:MI:SS.FF3') OR outtime  <= to_timestamp('2015-01-30 23:59:59.999', 'YYYY/MM/DD-HH24:MI:SS.FF3');
still this delete operation is not working and not a single row has been deleted from the table.

Now I have the following questions -

1. Does PostgreSQL have limitations for the deletion of large data?
2. Should I run VACUUM after stopping the cron script, to get a "smaller" table?
3. Would dropping the indexes help here? Not sure.
4. Should I think about partitioning? Is there any limitation when dealing with large data in PostgreSQL?

regards
Mitu





--
Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general



