RE: Big performance slowdown from 11.2 to 13.3


I am not sure I understand this parameter well enough, but it is at its default value of 1000 right now. I have read Robert's post (http://rhaas.blogspot.com/2018/06/using-forceparallelmode-correctly.html) and could play with those parameters, but I am unsure whether what you are describing will unlock this 2GB limit.
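
For reference, this is the kind of session-level experiment I have in mind (the table and column names below are placeholders, not the actual query): lower the parallel cost settings, allow a few more workers per Gather, and re-run EXPLAIN to see whether the planner picks a parallel plan.

  -- Check the current settings (parallel_setup_cost defaults to 1000,
  -- parallel_tuple_cost to 0.1, max_parallel_workers_per_gather to 2).
  SHOW parallel_setup_cost;
  SHOW parallel_tuple_cost;
  SHOW max_parallel_workers_per_gather;

  -- Make the planner more willing to parallelize, for this session only.
  SET parallel_setup_cost = 0;
  SET parallel_tuple_cost = 0;
  SET max_parallel_workers_per_gather = 4;

  -- Placeholder aggregate query; look for Gather and Partial HashAggregate
  -- nodes in the resulting plan.
  EXPLAIN (ANALYZE, BUFFERS)
  SELECT some_key, count(*), sum(some_value)
  FROM some_big_table
  GROUP BY some_key;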

 

 

From: Vijaykumar Jain <vijaykumarjain.github@xxxxxxxxx>
Sent: Thursday, July 22, 2021 16:32
To: ldh@xxxxxxxxxxxxxxxxxx
Cc: Justin Pryzby <pryzby@xxxxxxxxxxxxx>; pgsql-performance@xxxxxxxxxxxxxx
Subject: Re: Big performance slowdown from 11.2 to 13.3

 

Just asking, I may be completely wrong.

 

Is this query parallel safe?

Can we force parallel workers, for example by setting a low parallel_setup_cost, so that the plan uses a Gather with partial HashAggregates? (A quick way to check parallel safety is sketched below.)

I am just assuming that with more workers doing things in parallel, each partial hash aggregate would have less to spill to disk, with a Gather at the end.
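
As a rough check of parallel safety (the function names below are placeholders for whatever the query actually calls), one can look at pg_proc.proparallel for the functions involved; anything marked unsafe will keep the planner from choosing a parallel plan.

  -- 's' = parallel safe, 'r' = parallel restricted, 'u' = parallel unsafe
  SELECT proname, proparallel
  FROM pg_proc
  WHERE proname IN ('my_func_1', 'my_func_2');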

 

I did some runs in my demo environment, not with the same query, but with some GROUP BY aggregates over around 25M rows, and the results looked reasonable, not too far off.

This was PG 14 on Ubuntu.

 
