Tom Lane writes:
> I'd bet on the extra time being in I/O for the per-batch temp files, since it's hard
> to see what else would be different if the data were identical in each run.
> Maybe the kernel is under memory pressure and is dropping the file data from
> in-memory disk cache. Or maybe it's going to disk all the time but the slow runs
> face more I/O congestion.
>
> Personally, for a problem of this size I'd increase work_mem enough so you
> don't get multiple batches in the first place.

Tom, thanks for the response. I'm very much a novice in this area - what do you mean by "a problem of this size", i.e. number of rows, hash memory usage?

Does 'shared read' mean that the block was read either 1) from disk or 2) from the in-memory disk cache?
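
For anyone following along, here is a minimal sketch of what I understand the suggestion to be - bump work_mem for the session and re-check the Hash node with EXPLAIN (ANALYZE, BUFFERS). The table names and the 256MB value are placeholders, not my actual query; the idea is just that work_mem should exceed the hash's reported memory usage so it stays in one batch:

    -- raise work_mem for the current session only (256MB is a guess;
    -- it needs to exceed the hash's reported memory usage so the join
    -- runs in a single batch)
    SET work_mem = '256MB';

    -- re-run with buffer statistics; the Hash node should then show
    -- "Batches: 1" and no per-batch temp-file I/O
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT count(*)
    FROM big_table b
    JOIN other_table o ON o.id = b.other_id;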