On 7/01/2010 5:03 AM, shulkae wrote:
I am writing a shell script which runs as a cron entry. The objective is to delete older records from a PostgreSQL DB. I have thousands of records. What is the optimal number of records to delete in one DELETE command? (My script will delete records in a loop, and I want to ensure that swap usage during the delete will not cause the delete to fail under low-disk-space conditions.)
The only thing that'd use much memory for deletes would be queued referential integrity trigger checks. Are you removing many millions of records from a table which is the _target_ of a foreign key relationship (i.e. the deleted-from table holds the referenced primary key)? If not, you shouldn't need to worry about memory.
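For concreteness, that's a schema shaped like the sketch below (table and column names here are made up, not from your setup): every row deleted from "parent" queues an RI check against "child".

    -- hypothetical schema: "parent" is the target of a foreign key,
    -- so deleting its rows queues RI checks against "child"
    CREATE TABLE parent (
        id integer PRIMARY KEY
    );

    CREATE TABLE child (
        id        serial PRIMARY KEY,
        parent_id integer REFERENCES parent (id)
    );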
If you *are* deleting from a big table that is the target of a foreign key relationship, make sure you have an index on the foreign key column in the referencing table(s) (the one(s) you're not deleting from). That will let Pg apply the RI checks much more quickly, though it shouldn't affect memory use.
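With the made-up schema above, that would be something like:

    -- without this index, each RI check has to scan "child" in full;
    -- with it, each per-row check becomes a cheap index lookup
    CREATE INDEX child_parent_id_idx ON child (parent_id);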
If you're worried about the OS's swap file growing, lock it to a fixed size (Windows), use a swap partition (most UNIXes), or give Pg its own partition.
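As for the batching itself, a loop like this is a common shape for that kind of cleanup (a sketch only; "old_records", "created_at" and the batch size of 10000 are placeholders, not anything from your setup):

    -- run repeatedly from the cron script until it reports "DELETE 0";
    -- each pass removes at most one batch of expired rows
    DELETE FROM old_records
    WHERE id IN (
        SELECT id
        FROM old_records
        WHERE created_at < now() - interval '90 days'
        LIMIT 10000
    );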
-- Craig Ringer
Thanks, Shul