On Fri, Nov 4, 2011 at 12:14 PM, Sorbara, Giorgio (CIOK)
<Giorgio.Sorbara@xxxxxxx> wrote:
>> How fast do you expect this to run?  It's aggregating 125 million
>> rows, so that's going to take some time no matter how you slice it.
>> Unless I'm misreading this, it's actually taking only about 4
>> microseconds per row, which does not obviously suck.
>
> Well, the problem is not how long it takes to process one row, but
> rather the best query plan I am supposed to get. I don't mean the
> planner is wrong, but I was expecting a feature that is not there
> (yet): we don't have a pure index scan. Fair enough. So I have
> approached the problem in a different way: getting rid of the
> degenerate dimensions and exploiting a "useless" dimension table.
> It's a workaround, but it actually seems to work :) Now I have a
> ~350 million row fact table and no partitions, and I am happy to get
> the data I want in 1 second or less.

Am I misreading the EXPLAIN ANALYZE output?  I'm reading it to say
that there were 125 million rows in the table that matched your filter
condition.  If that's correct, I don't think index-only scans (which
will be in 9.2) are going to help you much - it might be faster, but
it's definitely not going to be anything like instantaneous.

On the flip side, if I *am* misreading the output and the number of
rows needed to compute the aggregate is actually some very small
number, then you ought to be getting an index scan even in older
versions.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
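
(A minimal sketch of the kind of check being discussed above, with a
made-up fact table and column names; the plan shape and row counts
shown in the comments are assumptions for illustration, not output
from the original thread.)

    -- Hypothetical star-schema fact table; names are invented.
    CREATE TABLE fact_sales (
        date_key   integer NOT NULL,
        store_key  integer NOT NULL,
        amount     numeric NOT NULL
    );
    CREATE INDEX ON fact_sales (date_key);

    -- The row count on the node feeding the Aggregate is what matters.
    EXPLAIN ANALYZE
    SELECT sum(amount) FROM fact_sales WHERE date_key = 20111104;

    -- If the plan looks roughly like
    --   Aggregate  (actual time=... rows=1 loops=1)
    --     ->  Seq Scan on fact_sales  (actual ... rows=125000000 ...)
    -- then ~125 million rows really did match the filter, and an
    -- index-only scan (9.2+) can only avoid the heap visits, not the
    -- aggregation work over all those rows.  If instead the scan node
    -- reports only a small number of rows, a plain index scan should
    -- already be chosen on older versions, assuming the planner's
    -- statistics are up to date (run ANALYZE on the table).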