On Tue, Jan 14, 2020 at 5:29 AM Israel Brewster <ijbrewster@xxxxxxxxxx> wrote:
>
> I was working on diagnosing a “slow” (about 6 second run time) query:
>
> SELECT
>     to_char(bucket, 'YYYY-MM-DD"T"HH24:MI:SS') as dates,
>     x_tilt,
>     y_tilt,
>     rot_x,
>     rot_y,
>     date_part('epoch', bucket) as timestamps,
>     temp
> FROM
>     (SELECT
>         time_bucket('1 week', read_time) as bucket,
>         avg(tilt_x::float) as x_tilt,
>         avg(tilt_y::float) as y_tilt,
>         avg(rot_x::float) as rot_x,
>         avg(rot_y::float) as rot_y,
>         avg(temperature::float) as temp
>     FROM tilt_data
>     WHERE station='c08883c0-fbe5-11e9-bd6e-aec49259cebb'
>         AND read_time::date <= '2020-01-13'::date
>     GROUP BY bucket) s1
> ORDER BY bucket;
>
> In looking at the explain analyze output, I noticed that it had an “external merge Disk” sort going on, accounting for about 1 second of the runtime (explain analyze output here: https://explain.depesz.com/s/jx0q). Since the machine has plenty of RAM available, I went ahead and increased the work_mem parameter. Whereupon the query plan got much simpler, and performance of said query completely tanked, increasing to about 15.5 seconds runtime (https://explain.depesz.com/s/Kl0S), most of which was in a HashAggregate.
>
> I am running PostgreSQL 11.6 on a machine with 128GB of RAM (so, like I said, plenty of RAM).
>
> How can I fix this? Thanks.

I have noticed that after increasing work_mem, your plan switched from a parallel plan to a non-parallel one. Earlier, the query was being executed with 3 workers; once the plan became non-parallel, the execution time roughly tripled.

For analysis, can you reduce the values of parallel_tuple_cost and parallel_setup_cost and see how the query behaves?

--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
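
P.S. A minimal sketch of that experiment, run in a single session so nothing persists (the values below are illustrative for testing only, not recommendations; the PostgreSQL defaults are 0.1 and 1000):

```sql
-- Make parallelism cheaper in the planner's eyes, for this session only.
-- Defaults: parallel_tuple_cost = 0.1, parallel_setup_cost = 1000.
SET parallel_tuple_cost = 0.01;
SET parallel_setup_cost = 100;

-- Re-run the original query under the new costs and compare plans.
EXPLAIN (ANALYZE, BUFFERS)
SELECT ... ;  -- the query from the original post

-- Restore the defaults when done.
RESET parallel_tuple_cost;
RESET parallel_setup_cost;
```

If the planner picks the parallel plan again with work_mem raised, that points at the cost parameters (rather than work_mem itself) as the knob worth tuning here.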