Cory Tucker <cory.tucker@xxxxxxxxx> writes:
> I was issuing a query on both databases to clean up some duplicates in
> preparation of applying new indexes.  On the 9.6 database with all the
> data in one table, the query runs fine in about 6 min.  On 10.3, with a
> work_mem setting of 1GB the query runs for about 7 minutes and then gets
> terminated with an out of memory error.

Hm, this seems a bit excessive:

MessageContext: 1333788672 total in 169 blocks; 2227176 free (9 chunks); 1331561496 used

and this is really grim:

65678 more child contexts containing 47607478048 total in 2577 blocks; 12249392 free (446 chunks); 47595228656 used

and this is just silly:

2018-03-28 19:20:33.264 UTC [10580] cory@match ERROR:  out of memory
2018-03-28 19:20:33.264 UTC [10580] cory@match DETAIL:  Failed on request of size 1610612736.

Can you extract a self-contained test case that uses unreasonable amounts
of memory?  It seems from this trace that the wheels are coming off in at
least two places, but identifying exactly where is impossible without
more info.

If you can't make a publishable test case, capturing a stack trace from
the point of the OOM error (set the breakpoint at errfinish) would
probably be enough info to figure out what is trying to grab 1.6GB in
one bite.  But it won't help us find out why so many empty ExprContexts
are getting created.

			regards, tom lane
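
P.S. In case a concrete recipe helps, something along these lines should
do it (the PID shown is just a placeholder, and you'll need a build with
debug symbols installed for the backtrace to be useful):

    -- in the session that will run the query, find its backend PID
    SELECT pg_backend_pid();

    # in another terminal, attach gdb to that backend
    $ gdb -p 12345
    (gdb) break errfinish
    (gdb) continue

    -- now run the failing query in the first session; when gdb stops
    -- at errfinish (that'll be the OOM error), capture the backtrace:
    (gdb) bt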