In response to tom:

> Hi,
>
> === Problem ===
>
> i have a db-table "data_measurand" with about 60000000 (60 Millions)
> rows and the following query takes about 20-30 seconds (with psql):
>
> mydb=# select count(*) from data_measurand;
>   count
> ----------
>  60846187
> (1 row)
>
> === Question ===
>
> - What can i do to improve the performance for the data_measurand table?

Short answer: nothing.

Long answer: PostgreSQL has to check the visibility of each record, so a count(*) forces a sequential scan. But you can get an estimate: ask pg_class (a system catalog), whose reltuples column contains an estimated row count.

http://www.postgresql.org/docs/current/static/catalog-pg-class.html

If you really need the exact row count, you can create a TRIGGER and use it to count all INSERTs and DELETEs.

Regards, Andreas

--
Andreas Kretschmer
Kontakt:  Heynitz: 035242/47150,   D1: 0160/7141639 (mehr: -> Header)
GnuPG: 0x31720C99, 1006 CCB4 A326 1D42 6431 2EB0 389D 1DC2 3172 0C99

--
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
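P.S.: For reference, the reltuples lookup mentioned above looks like this (a minimal sketch using the data_measurand table from the question):

```sql
-- Estimated row count from the planner statistics; accuracy depends
-- on how recently ANALYZE or autovacuum last ran on the table.
SELECT reltuples::bigint AS estimated_rows
FROM pg_class
WHERE relname = 'data_measurand';
```

This returns instantly because it reads a single catalog row instead of scanning the table.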
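And a minimal sketch of the trigger approach; the counter table and function names (data_measurand_count, count_trig) are made up for illustration:

```sql
-- Single-row counter table, seeded once with the slow exact count.
CREATE TABLE data_measurand_count (n bigint NOT NULL);
INSERT INTO data_measurand_count
    SELECT count(*) FROM data_measurand;

-- Trigger function: keep the counter in sync on every INSERT/DELETE.
CREATE FUNCTION count_trig() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'INSERT' THEN
        UPDATE data_measurand_count SET n = n + 1;
        RETURN NEW;
    ELSE  -- TG_OP = 'DELETE'
        UPDATE data_measurand_count SET n = n - 1;
        RETURN OLD;
    END IF;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER data_measurand_count_trig
    AFTER INSERT OR DELETE ON data_measurand
    FOR EACH ROW EXECUTE PROCEDURE count_trig();

-- Fast exact count from now on:
SELECT n FROM data_measurand_count;
```

Be aware that every writer now updates the same counter row, which serializes concurrent INSERTs/DELETEs on data_measurand, and TRUNCATE is not covered by a row-level trigger.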