On Wed, Oct 10, 2012 at 9:09 AM, Korisk <Korisk@xxxxxxxxx> wrote:
> Hello! Is it possible to speed up the plan?
>
> Sort  (cost=573977.88..573978.38 rows=200 width=32) (actual time=10351.280..10351.551 rows=4000 loops=1)
>   Output: name, (count(name))
>   Sort Key: hashcheck.name
>   Sort Method: quicksort  Memory: 315kB
>   ->  HashAggregate  (cost=573968.24..573970.24 rows=200 width=32) (actual time=10340.507..10341.288 rows=4000 loops=1)
>         Output: name, count(name)
>         ->  Seq Scan on public.hashcheck  (cost=0.00..447669.16 rows=25259816 width=32) (actual time=0.019..2798.058 rows=25259817 loops=1)
>               Output: id, name, value
> Total runtime: 10351.989 ms

AFAIU there is no query optimization solution for this.

It may be worth creating a table hashcheck_stat (name, cnt) and incrementing/decrementing the cnt values with triggers if you need to get counts fast (a rough sketch is appended below).

--
Sergey Konoplev
a database and software architect
http://www.linkedin.com/in/grayhemp

Jabber: gray.ru@xxxxxxxxx Skype: gray-hemp Phone: +14158679984
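
A minimal sketch of that counter-table approach, assuming hashcheck.name is text; the table, function, and trigger names are placeholders, and UPDATEs that change name, as well as concurrent inserts of a brand-new name, would need extra handling:

CREATE TABLE hashcheck_stat (
    name text PRIMARY KEY,
    cnt  bigint NOT NULL DEFAULT 0
);

CREATE OR REPLACE FUNCTION hashcheck_stat_maintain() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'INSERT' THEN
        -- increment the counter, creating the row on first sight of a name
        UPDATE hashcheck_stat SET cnt = cnt + 1 WHERE name = NEW.name;
        IF NOT FOUND THEN
            INSERT INTO hashcheck_stat (name, cnt) VALUES (NEW.name, 1);
        END IF;
        RETURN NEW;
    ELSIF TG_OP = 'DELETE' THEN
        -- decrement the counter for the removed row
        UPDATE hashcheck_stat SET cnt = cnt - 1 WHERE name = OLD.name;
        RETURN OLD;
    END IF;
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER hashcheck_stat_trg
    AFTER INSERT OR DELETE ON hashcheck
    FOR EACH ROW EXECUTE PROCEDURE hashcheck_stat_maintain();

With that in place the counts come from the small hashcheck_stat table instead of a seq scan over ~25M rows:

SELECT name, cnt FROM hashcheck_stat ORDER BY name;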