Thalis Kalfigkopoulos <tkalfigo@xxxxxxxxx> writes:
> I read somewhere that the following query gives a quick estimate of the #
> of rows in a table regardless of the table's size (which would matter in a
> simple SELECT count(*)?):

> SELECT (CASE WHEN reltuples > 0 THEN
>                 pg_relation_size('mytable')/(8192*relpages/reltuples)
>              ELSE 0
>              END)::bigint AS estimated_row_count
> FROM pg_class
> WHERE oid = 'mytable'::regclass;

This seems a bit dubious, as it's protecting against only one of two
possible zero-divide conditions, and ignoring the risk of integer
overflow in 8192*relpages.  It also seems rather inconvenient to have
to specify 'mytable' twice.  I'd try something like

select case when relpages > 0
            then (pg_relation_size(oid)::float8 * reltuples / relpages / 8192)::bigint
            else 0::bigint
       end
from pg_class where oid = 'mytable'::regclass;

> If relpages & reltuples are recorded accurately each time VACUUM is run,
> wouldn't it be the same to just grab directly the value of reltuples like:

> SELECT reltuples FROM pg_class WHERE oid='mytable'::regclass;

Uh, no.  The whole point of the more complicated query is to scale the
reltuples/relpages tuple-density ratio up to the table's current
physical size, so that you get an estimate that is not too far off even
if vacuum hasn't been run lately.

It's also worth noting that in recent PG versions, reltuples and
relpages are themselves only moving-average estimates, and so your
premise is faulty anyway: they are *not* necessarily exact even
immediately after a vacuum.

			regards, tom lane

--
Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
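
[Editor's note: the tuple-density scaling described above can be sketched
outside SQL. The function and all the numbers below are made up for
illustration; in PostgreSQL the inputs come from pg_class.reltuples,
pg_class.relpages, and pg_relation_size().]

```python
# Sketch of the tuple-density scaling behind the improved query.
# Illustrative only: real values come from the pg_class catalog and
# pg_relation_size(); 8192 is the default PostgreSQL block size (BLCKSZ).

BLCKSZ = 8192

def estimated_row_count(rel_size_bytes, reltuples, relpages):
    """Scale the reltuples/relpages density up to the current physical size."""
    if relpages <= 0:
        # Mirrors the 'else 0' arm: no stats yet, so no estimate.
        return 0
    pages_now = rel_size_bytes / BLCKSZ        # current size in pages
    tuples_per_page = reltuples / relpages     # density from last vacuum/analyze
    return int(pages_now * tuples_per_page)

# Stats from the last vacuum: 1000 pages holding 100,000 rows.
# The table has since grown to 1500 pages; the estimate scales with it.
print(estimated_row_count(1500 * 8192, 100_000, 1000))  # -> 150000
```

Doing the arithmetic in floating point, as the improved query does with
its ::float8 cast, also sidesteps the integer overflow that 8192*relpages
risks once relpages gets large.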