> We might even consider taking experts' advice on how to tune queries and
> the server, but if Postgres is going to behave like this, I am not sure
> we would be able to continue with it.
>
> Having said that, I would say again that I am completely new to this
> territory, so I might miss lots and lots of things.

My two cents: Postgres out of the box might not be a good choice for
data-warehouse-style queries, because it is optimized to run thousands of
small queries (OLTP-style processing), not one big monolithic query. I've
faced similar problems myself, and here are a few tricks I followed to get
my elephant to do real-time ad hoc analysis on a table with ~45 columns
and a few billion rows in it.

1. Partition your table and use constraint exclusion to the fullest
extent (see the first sketch after this list).

2. Fire multiple small queries distributed over the partitions and
aggregate the results at the application layer (second sketch below).
This is needed because you want to exploit all your cores to the fullest
extent (assuming you have enough memory for an effective FS cache). If
your dataset grows beyond the capability of a single system, try
something like Stado (GridSQL).

3. Storing indexes on RAM or a faster disk (using tablespaces) and using
them properly makes the system blazing fast (third sketch below).
CAUTION: this requires additional infrastructure setup for backup and
recovery.

4. If you're accessing a small set of columns in a big table and you feel
compressing the data would help a lot, give this FDW a try (fourth sketch
below): https://github.com/citusdata/cstore_fdw
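To make (1) concrete, here is a minimal sketch of the classic
inheritance-plus-CHECK-constraint partitioning scheme that constraint
exclusion works against. The table and column names (facts, event_time,
metric) are made up for illustration:

    -- Parent table; children carry non-overlapping CHECK constraints
    -- that the planner uses to skip partitions entirely.
    CREATE TABLE facts (
        event_time  timestamptz NOT NULL,
        device_id   bigint,
        metric      numeric
    );

    CREATE TABLE facts_2014_01 (
        CHECK (event_time >= '2014-01-01' AND event_time < '2014-02-01')
    ) INHERITS (facts);

    CREATE TABLE facts_2014_02 (
        CHECK (event_time >= '2014-02-01' AND event_time < '2014-03-01')
    ) INHERITS (facts);

    -- 'partition' (the default) applies exclusion to inheritance scans.
    SET constraint_exclusion = partition;

    -- EXPLAIN should show that only facts_2014_01 is scanned here.
    EXPLAIN SELECT sum(metric)
    FROM facts
    WHERE event_time >= '2014-01-15' AND event_time < '2014-01-20';

(Inserts have to be routed to the right child, either with a trigger on
the parent or by inserting into the child tables directly.)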
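For (2), the point is that each partition scan runs on its own backend,
and therefore its own core, with the partial results recombined
afterwards. Normally you would fan the queries out from the application
over separate connections; purely so it fits in one SQL script, the
sketch below shows the same pattern with dblink's asynchronous API. The
connection strings and table names are assumptions. Note that aggregates
must be recombined correctly: an average, for instance, is sum-of-sums
divided by sum-of-counts, not an average of averages.

    CREATE EXTENSION IF NOT EXISTS dblink;

    -- One connection per partition, so each scan gets its own backend.
    SELECT dblink_connect('c1', 'dbname=mydb');
    SELECT dblink_connect('c2', 'dbname=mydb');

    -- Send both partition scans without waiting for their results.
    SELECT dblink_send_query('c1',
        'SELECT count(*), sum(metric) FROM facts_2014_01');
    SELECT dblink_send_query('c2',
        'SELECT count(*), sum(metric) FROM facts_2014_02');

    -- Collect the partial aggregates and recombine them.
    SELECT sum(n) AS total_rows, sum(s) AS total_metric
    FROM (
        SELECT * FROM dblink_get_result('c1') AS r1(n bigint, s numeric)
        UNION ALL
        SELECT * FROM dblink_get_result('c2') AS r2(n bigint, s numeric)
    ) partials;

    SELECT dblink_disconnect('c1');
    SELECT dblink_disconnect('c2');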
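For (3), tablespaces let you pin just the hot indexes to a faster
device. A sketch, where '/mnt/ssd/pg_fast' is a placeholder path; a
tmpfs mount would put the index in RAM, which is exactly why the extra
backup/recovery machinery is needed, since its contents vanish on
reboot:

    -- The directory must already exist and be owned by the postgres
    -- OS user.
    CREATE TABLESPACE fast_space LOCATION '/mnt/ssd/pg_fast';

    -- Build a new index directly in the fast tablespace...
    CREATE INDEX facts_2014_01_dev_time_idx
        ON facts_2014_01 (device_id, event_time)
        TABLESPACE fast_space;

    -- ...or relocate an existing one (this rewrites the index files).
    ALTER INDEX facts_2014_01_dev_time_idx SET TABLESPACE fast_space;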
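And for (4), a sketch along the lines of the cstore_fdw README: a
compressed, columnar copy of the wide table, so queries that read only
one or two of the ~45 columns skip the rest on disk. The file paths are
placeholders, and the options shown (filename, compression 'pglz') are
the ones the README documents:

    CREATE EXTENSION cstore_fdw;
    CREATE SERVER cstore_server FOREIGN DATA WRAPPER cstore_fdw;

    CREATE FOREIGN TABLE facts_columnar (
        event_time  timestamptz,
        device_id   bigint,
        metric      numeric
    ) SERVER cstore_server
      OPTIONS (filename '/mnt/data/facts.cstore', compression 'pglz');

    -- Load it in bulk with COPY (path is a placeholder):
    COPY facts_columnar FROM '/path/to/facts.csv' WITH CSV;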