On 2010-10-12 19:07, Tom Lane wrote:
> Anyway, if anyone is hot to make COUNT(*) faster, that's where to look. I don't think any of the previous discussion in this thread is on-point at all, except for the parts where people suggested avoiding it.
I would have to say that although it is nice to get count(*) faster, I think your testing is far too simple. It pretty much proves that, in terms of the code involved in the count(*) process, there is not much to be gained. But your table has far too little payload. As PG currently works, it starts pushing data off to TOAST when the tuple size reaches 1KB, and the speed of count(*) is very much dominated by the amount of "dead weight" it has to read in along with the heap access for each row. A table this slim is, in my view, very much an extreme case on the small side. Just having 32 bytes of "payload" would more or less double your time to count, if I read your test results correctly, and in situations where disk access is needed, far more than that.

Dividing pg_relation_size by the number of tuples in our production system, I end up with no average tuple size below 100 bytes.

Without having complete insight: a visibility map that could be used in conjunction with indices would solve that. What the cost of maintaining it would be is also a factor.

Jesper
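(The back-of-envelope average-tuple-size estimate above can be sketched as a query; `mytable` is a placeholder name, and `reltuples` is only the planner's estimate, so the result is approximate:)

```sql
-- Rough average tuple size: on-disk heap size divided by the
-- planner's estimated row count. 'mytable' is a placeholder.
SELECT pg_relation_size('mytable') / NULLIF(reltuples::bigint, 0)
           AS avg_tuple_bytes
FROM pg_class
WHERE relname = 'mytable';
```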