> Looking at
> http://www.postgresql.org/docs/current/static/runtime-config-query.html#GUC-RANDOM-PAGE-COST
> I wonder if I should try reducing random_page_cost?

Yes, and I should speak to Heroku about reducing it by default.

random_page_cost represents the ratio between the cost of a sequential
lookup of a single row vs. the cost of a random lookup. On standard
spinning media on a dedicated server, 4.0 is a pretty good estimate of
this ratio. However, you are running on shared storage in a cloud,
which has different math.

> Something that might help when it comes to advice on performance tuning
> is that this database is used only for analytics. It's essentially a
> partial replication of a production (document-oriented) database. So a
> lot of normal operations that might employ a series of sequential
> fetches may not actually be the norm in my case. Rather, I'm doing a
> lot of counts on data that is typically randomly distributed.

In that case, you might consider increasing default_statistics_target
to 1000 and ANALYZEing your whole database. That increases the sample
size for the database statistics collector, and most of the time will
result in somewhat better plans on large tables and data with skewed
distributions. This is not something Heroku would do as standard,
since most of their users are running basic transactional web apps.

--
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com

--
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
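The two changes discussed above could be applied roughly as follows. This is a minimal sketch, assuming a database named "analytics" (a hypothetical name) and permission to run ALTER DATABASE; managed platforms such as Heroku may restrict which settings you can change, and the 1.1 value for random_page_cost is a common SSD/cloud starting point, not a value from this thread — benchmark representative queries before and after.

```sql
-- Lower the planner's estimate of random-I/O cost relative to
-- sequential I/O. 1.1 is a common starting point for SSD-backed or
-- shared cloud storage (assumption: validate against your workload).
ALTER DATABASE analytics SET random_page_cost = 1.1;

-- Raise the statistics sample size as suggested above.
ALTER DATABASE analytics SET default_statistics_target = 1000;

-- Database-level settings apply to new sessions; reconnect, then
-- re-sample all tables so the planner sees the larger sample:
ANALYZE;

-- Verify the settings took effect in the new session:
SHOW random_page_cost;
SHOW default_statistics_target;
```

If raising the target database-wide makes ANALYZE too slow, the same knob exists per column, e.g. `ALTER TABLE some_table ALTER COLUMN some_column SET STATISTICS 1000;` (hypothetical table and column names), which limits the extra sampling cost to the skewed columns that need it.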