On Mon, Oct 31, 2011 at 9:52 AM, Sorbara, Giorgio (CIOK) <Giorgio.Sorbara@xxxxxxx> wrote:
>  Group  (cost=0.00..4674965.80 rows=200 width=17) (actual time=13.375..550943.592 rows=1 loops=1)
>    ->  Append  (cost=0.00..4360975.94 rows=125595945 width=17) (actual time=13.373..524324.817 rows=125595932 loops=1)
>          ->  Index Scan using f_suipy_pkey on f_suipy  (cost=0.00..5.64 rows=1 width=58) (actual time=0.019..0.019 rows=0 loops=1)
>                Index Cond: ((fk_theme)::text = 'main_py_six_scxc'::text)
>          ->  Seq Scan on f_suipy_main_py_six_scxc f_suipy  (cost=0.00..4360970.30 rows=125595944 width=17) (actual time=13.352..495259.117 rows=125595932 loops=1)
>                Filter: ((fk_theme)::text = 'main_py_six_scxc'::text)
>  Total runtime: 550943.699 ms

How fast do you expect this to run?  It's aggregating 125 million rows, so that's going to take some time no matter how you slice it.  Unless I'm misreading this, it's actually taking only about 4 microseconds per row, which does not obviously suck.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
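
For reference, the per-row figure is a rough back-of-the-envelope check, dividing the plan's total runtime by the actual row count reported for the Append node:

    550943.699 ms / 125595932 rows ≈ 0.0044 ms/row ≈ 4.4 microseconds per row

which is consistent with the "about 4 microseconds per row" estimate above.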