-----Original Message-----
From: David Rowley <dgrowleyml@xxxxxxxxx>
Sent: Thursday, July 22, 2021 12:18
To: Peter Geoghegan <pg@xxxxxxx>
Cc: Tom Lane <tgl@xxxxxxxxxxxxx>; Jeff Davis <pgsql@xxxxxxxxxxx>; ldh@xxxxxxxxxxxxxxxxxx; Justin Pryzby <pryzby@xxxxxxxxxxxxx>; pgsql-performance@xxxxxxxxxxxxxx
Subject: Re: Big performance slowdown from 11.2 to 13.3

On Fri, 23 Jul 2021 at 04:14, Peter Geoghegan <pg@xxxxxxx> wrote:
>
> On Thu, Jul 22, 2021 at 8:45 AM Tom Lane <tgl@xxxxxxxxxxxxx> wrote:
> > That is ... weird. Maybe you have found a bug in the spill-to-disk
> > logic; it's quite new after all. Can you extract a self-contained
> > test case that behaves this way?
>
> I wonder if this has something to do with the way that the input data
> is clustered. I recall noticing that that could significantly alter
> the behavior of HashAggs as of Postgres 13.

Isn't it more likely to be reaching the group limit rather than the memory limit?

    if (input_groups * hashentrysize < hash_mem * 1024L)
    {
        if (num_partitions != NULL)
            *num_partitions = 0;
        *mem_limit = hash_mem * 1024L;
        *ngroups_limit = *mem_limit / hashentrysize;
        return;
    }

There are 55 aggregates on a varchar(255), so I think hashentrysize is pretty big. If it were 255 * 55 bytes, then only 765591 groups would fit in the 10GB of memory.

David

---------------------------------------------------------------------

Hello,

So, FYI... the query I shared is actually one of our simpler use cases 😊 We have a similar pivot query over 600 columns that creates a large flat table for analysis on an even larger table. It takes about 15 minutes to run on V11, with heavy CPU usage and no particular memory-usage spike that I can detect via Task Manager. We have been pushing PG hard to simplify the workflows of our analysts and data scientists downstream.

Thank you,
Laurent.
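
[Editor's note: as a quick sanity check of David's back-of-the-envelope estimate above, here is a minimal standalone sketch of the arithmetic. The 255 * 55 per-group size and the 10GB hash_mem figure are his stated assumptions, not measured values; the planner's real hashentrysize also includes per-entry tuple and transition-state overhead, so the actual group limit would be somewhat lower.]

    #include <stdio.h>

    int main(void)
    {
        /* Assumed figures from David's message: 55 varchar(255) aggregates
         * at roughly 255 bytes each, and 10GB of hash_mem. hash_mem is in
         * kilobytes, matching the "hash_mem * 1024L" in the quoted snippet. */
        long long hashentrysize = 255LL * 55LL;   /* 14025 bytes per group */
        long long hash_mem = 10LL * 1024 * 1024;  /* 10GB expressed in kB  */

        long long mem_limit = hash_mem * 1024;    /* memory limit in bytes */
        long long ngroups_limit = mem_limit / hashentrysize;

        printf("ngroups_limit = %lld\n", ngroups_limit);  /* prints 765591 */
        return 0;
    }

With these assumptions, any input estimated at more than ~765k distinct groups would fail the "fits in memory" test and take the spill path, regardless of how much of the 10GB actually ends up used.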