I'm running a labour-intensive series of queries on a medium-sized dataset (~100,000 rows) with geometry objects and both GiST and B-tree indexes. The queries are embedded in plpgsql and perform multiple updates, inserts, and deletes on the tables, as well as multiple selects that need the indexes to perform acceptably. My problem is that I can't embed a VACUUM ANALYZE to reclaim dead space and speed up processing, so the queries get slower and slower as the un-freed space builds up.

From my understanding, transaction commits within batches are not allowed, so no VACUUM can be embedded within the queries. Are there plans to change this? Is there another way to reclaim dead space in tables that see repeated inserts, updates, and deletes? I have tried a plain ANALYZE, and it doesn't quite cut it: after the first round of processing I'm getting seq scans instead of hitting the indexes correctly.

My apologies if this is directed at the wrong forum, and thank you for your help.

-cris pond

--
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
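
P.S. To be concrete, the pattern that degrades looks roughly like the sketch below. The table, column, and function names are made up for illustration, and I'm assuming PostGIS for the geometry functions; the point is the mix of heavy DML and index-dependent selects inside one function, where VACUUM is rejected:

```sql
-- Hypothetical sketch of the pattern (names are invented for illustration).
CREATE OR REPLACE FUNCTION process_batch() RETURNS void AS $$
BEGIN
    -- Heavy churn: each pass leaves dead tuples behind.
    DELETE FROM parcels WHERE status = 'stale';

    UPDATE parcels
       SET status = 'processed'
     WHERE status = 'pending';

    INSERT INTO parcels (geom, status)
        SELECT geom, 'new' FROM staging_parcels;

    -- A select that needs the GiST index on geom to stay fast.
    PERFORM count(*)
       FROM parcels p
       JOIN regions r ON p.geom && r.geom;

    -- What I'd like to do here, but can't, since VACUUM
    -- cannot run inside a transaction block / function:
    -- VACUUM ANALYZE parcels;
END;
$$ LANGUAGE plpgsql;
```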