On Tue, Mar 30, 2010 at 12:30 PM, Faheem Mitha <faheem@xxxxxxxxxxxxx> wrote:
> Sure, but define sane setting, please. I guess part of the point is that I'm
> trying to keep memory low, and it seems this is not part of the planner's
> priorities. That is, it does not take memory usage into consideration when
> choosing a plan. If that is wrong, let me know, but that is my
> understanding.

I don't quite understand why you're confused here. We've already explained
to you that the planner will not employ a plan that uses more than the
amount of memory defined by work_mem for each sort or hash. Typical
settings for work_mem are between 1MB and 64MB. 1GB is enormous.

>>>> You might need to create some indices, too.
>>>
>>> Ok. To what purpose? This query picks up everything from the
>>> tables and the planner does table scans, so conventional wisdom,
>>> and indeed my experience, says that indexes are not going to be so
>>> useful.
>>
>> There are situations where scanning the entire table to build up a
>> hash table is more expensive than using an index. Why not test it?
>
> Certainly, but I don't know what you and Robert have in mind, and I'm not
> experienced enough to make an educated guess. I'm open to specific
> suggestions.

Try creating an index on geno on the columns that are being used for the
join.

...Robert
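To make the work_mem advice above concrete, here is a minimal SQL sketch.
SET LOCAL confines the change to one transaction; the 16MB figure is just
an arbitrary value inside the 1MB-64MB range mentioned above, not a
recommendation from the thread:

    -- Check the current per-operation memory limit.
    SHOW work_mem;

    -- Re-test the query with a more typical setting. SET LOCAL limits the
    -- change to this transaction only.
    BEGIN;
    SET LOCAL work_mem = '16MB';
    EXPLAIN ANALYZE ...;   -- the query under discussion
    COMMIT;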
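Similarly, a sketch of the index suggestion. The column names idlink_id and
anno_id are hypothetical placeholders, since the actual schema is not shown
in this message; substitute whatever columns geno is joined on:

    -- Hypothetical column names; replace with the columns geno is actually
    -- joined on in the query.
    CREATE INDEX geno_join_idx ON geno (idlink_id, anno_id);
    ANALYZE geno;
    -- Then re-run EXPLAIN ANALYZE to see whether the planner uses the index
    -- and whether it actually improves the runtime.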