I hesitate to raise this issue again, but I've noticed something I thought might be worth mentioning. I've never considered the performance of count(*) on a table to be a significant issue, but I'm prepared to say that -- for me, at least -- it is officially and totally a NON-issue.

We are replicating data from 72 source databases, each holding the official copy of a subset of the data, to four identical consolidated databases, spread across separate locations, to serve our web site and other organization-wide needs. Currently, two of these central databases are running a commercial product and two are running PostgreSQL.

There have been several times that I have run a SELECT COUNT(*) on an entire table on all central machines. On identical hardware, with identical data and equivalent query loads, the PostgreSQL databases have returned a count in 50% to 70% of the time taken by the commercial product, in spite of the fact that the commercial product scans a non-clustered index while PostgreSQL scans the data pages. The tables have had from a few million to 132 million rows. The databases are about 415 GB each; the servers have 6 GB RAM each. We've been running PostgreSQL 8.1, tuned and maintained based on advice from the documentation and these lists.

I suspect that where people report significantly worse performance for count(*) under PostgreSQL than under some other product, it may sometimes be because they have not properly tuned PostgreSQL, or have not paid attention to maintenance issues regarding dead space in the tables.

My recent experience, for what it's worth.

-Kevin
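
P.S. For anyone seeing slow count(*), a minimal sketch of the kind of maintenance and estimation I'm alluding to (the table name mytable is a placeholder; the second query reads the planner's row estimate from the pg_class catalog rather than scanning the table):

    -- Reclaim dead space and refresh planner statistics; bloat from
    -- dead tuples forces a count(*) scan to read far more pages than needed.
    VACUUM ANALYZE mytable;

    -- When an approximate row count is good enough, the planner's
    -- estimate avoids a full scan entirely.
    SELECT reltuples FROM pg_class WHERE relname = 'mytable';

The estimate is only as fresh as the last VACUUM or ANALYZE, so it is no substitute for an exact count -- but for monitoring-style questions it is often all you need.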