Hello,

We've been a satisfied user of PostgreSQL for several years, and use it to
power a national pet adoption website: http://www.adoptapet.com/

Recently we've had a regularly timed, middle-of-the-night problem where
database handles are exhausted for a very brief period. In tracking it down,
I have found that the event seems to coincide with a cron script that deletes
from a large logging table, but I'm not certain whether this is the cause or
just a correlation.

We are deleting about 5 million rows from a time-based logging table that is
replicated by Slony. We are currently using a single DELETE statement, which
takes about 15 minutes to run. There is no RI on the table, but because of
Slony a trigger fires for every row deleted, causing a corresponding
insertion into another table so the deletion can be replicated to the slave.

My questions:

- Could this kind of activity lead to an upward spiral in database handle
  usage?

- Would it be advisable to use several small DELETE statements instead, to
  delete rows in batches of 1,000? We could use the recipe for this that was
  posted earlier to this list (a fleshed-out sketch is in the P.S. below):

    delete from table where pk in
        (select pk from table where delete_condition limit X);

Partitioning seems attractive here, but partitions aren't easy to use with
Slony. Perhaps once we migrate to PostgreSQL 9.0 and its hot standby feature
we can consider that.

Thanks for your help!

    Mark

 . . . . . . . . . . . . . . . . . . . . . . . . . . .
   Mark Stosberg            Principal Developer
   mark@xxxxxxxxxxxxxxx     Summersault, LLC
   765-939-9301 ext 202     database driven websites
 . . . . . http://www.summersault.com/ . . . . . . . .
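P.S. In case a concrete version of the batched recipe is useful, here is a
rough sketch of what I have in mind. The table and column names (log_table,
id, logged_at), database name, and the 30-day cutoff are all made up for
illustration; ours differ. Each psql invocation commits its own small
transaction, so the locks and the Slony trigger work are spread across many
short transactions instead of one 15-minute statement:

    #!/bin/sh
    # Purge old log rows in batches of 1,000, one transaction per batch.
    DB=mydb
    SQL="DELETE FROM log_table
         WHERE id IN (SELECT id FROM log_table
                      WHERE logged_at < now() - interval '30 days'
                      LIMIT 1000);"
    while :; do
        # psql prints the command tag, e.g. "DELETE 1000"
        tag=$(psql -X -c "$SQL" "$DB")
        case "$tag" in
            *"DELETE 0"*) break ;;   # nothing left to delete
        esac
        sleep 1                      # brief pause to let other sessions in
    done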