On Thu, 14 Jan 2010, tom wrote:
>>> i have a db-table "data_measurand" with about 60000000 (60 Millions)
>>> rows and the following query takes about 20-30 seconds (with psql):
>>>
>>> mydb=# select count(*) from data_measurand;
>>>   count
>>> ----------
>>>  60846187
>>> (1 row)
>> Sounds pretty reasonable to me. Looking at your table, the rows are maybe
>> 200 bytes wide? That's 12GB of data for Postgres to munch through. 30
>> seconds is really rather quick for that (400MB/s). What sort of RAID array
>> is managing to give you that much?
> I use a software raid and LVM for Logical Volume Management. Filesystem
> is ext3.
Ditch LVM.
This is an FAQ. Counting the rows in a table is an expensive operation in
Postgres: because of MVCC, an index alone can't tell which row versions are
visible to your transaction, so count(*) has to scan the whole table. If you
want, you can keep track of the number of rows yourself with triggers, but
beware that this will slow down write access to the table.
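
Something along these lines would do it (a minimal, untested sketch; the
counter table and function names here are my own invention):

CREATE TABLE data_measurand_count (n bigint NOT NULL);

-- Seed the counter once. On a live table you would do this while holding
-- a lock that blocks writes, so the seed and the trigger can't miss or
-- double-count rows.
INSERT INTO data_measurand_count SELECT count(*) FROM data_measurand;

CREATE FUNCTION data_measurand_count_trig() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'INSERT' THEN
        UPDATE data_measurand_count SET n = n + 1;
    ELSIF TG_OP = 'DELETE' THEN
        UPDATE data_measurand_count SET n = n - 1;
    END IF;
    RETURN NULL;  -- return value is ignored for AFTER row triggers
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER data_measurand_count
    AFTER INSERT OR DELETE ON data_measurand
    FOR EACH ROW EXECUTE PROCEDURE data_measurand_count_trig();

After that, "SELECT n FROM data_measurand_count" returns instantly. The cost
is the slowdown I mentioned: every writing transaction updates the single
counter row, so concurrent writers serialize on it. Note also that TRUNCATE
doesn't fire row-level triggers, so truncating the table means re-seeding
the counter by hand.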
Matthew
--
Nog: Look! They've made me into an ensign!
O'Brien: I didn't know things were going so badly.
Nog: Frightening, isn't it?