Thanks Jim and Tom. At least now I've got a direction to head in. I think
for now I'll probably reduce work_mem as a stop-gap measure to get the
query running again. This will buy me some time to redesign it. I'll
probably separate out each subquery and store its results in a table
(would a temp table be a good solution here?) before I pull it all
together with the final query.

> Egad, what a mess :-(. By my count you have 89 hash joins, 24 sorts,
> and 8 hash aggregations in there. In total these will feel authorized
> to use 121 times work_mem. Since you've got work_mem set to 256 meg,
> an out-of-memory condition doesn't seem that surprising. You need to
> make work_mem drastically smaller for this query. Or else break it
> down into multiple steps.

Except won't the sorts pull in all data from their underlying node before
proceeding, which should free the memory from those underlying nodes? If
so, it looks like it's not nearly as bad, only taking about 20x work_mem
(which of course still isn't great...)

--
Jim C. Nasby, Sr. Engineering Consultant      jnasby@xxxxxxxxxxxxx
Pervasive Software    http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf     cell: 512-569-9461
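
For reference on the arithmetic above: 121 x 256MB works out to roughly
30GB of potential memory use, while the more optimistic 20x estimate is
still around 5GB.

Here is a minimal sketch of the "break it into multiple steps" approach
being discussed: lower work_mem for just this session and materialize
each subquery into a temp table before the final join. The table and
column names (orders, visits, customer_id, and so on) are hypothetical
placeholders, not the actual query from this thread.

    BEGIN;

    -- Limit per-sort/per-hash memory for this transaction only, rather
    -- than changing the server-wide setting.
    SET LOCAL work_mem = '16MB';

    -- Materialize one subquery at a time; ON COMMIT DROP makes the temp
    -- tables go away automatically at the end of the transaction.
    CREATE TEMP TABLE step1 ON COMMIT DROP AS
    SELECT customer_id, sum(amount) AS total
    FROM   orders
    GROUP  BY customer_id;

    CREATE TEMP TABLE step2 ON COMMIT DROP AS
    SELECT customer_id, count(*) AS n_visits
    FROM   visits
    GROUP  BY customer_id;

    -- ANALYZE so the planner has row estimates for the final join.
    ANALYZE step1;
    ANALYZE step2;

    -- Pull it all together with a much simpler final query.
    SELECT s1.customer_id, s1.total, s2.n_visits
    FROM   step1 s1
    JOIN   step2 s2 USING (customer_id);

    COMMIT;

SET LOCAL keeps the reduced work_mem scoped to the transaction, so other
sessions (and later work in this one) keep the normal setting.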