RE: Big performance slowdown from 11.2 to 13.3

-----Original Message-----
From: ldh@xxxxxxxxxxxxxxxxxx <ldh@xxxxxxxxxxxxxxxxxx> 
Sent: Wednesday, July 21, 2021 19:46
To: Tom Lane <tgl@xxxxxxxxxxxxx>
Cc: Peter Geoghegan <pg@xxxxxxx>; Justin Pryzby <pryzby@xxxxxxxxxxxxx>; pgsql-performance@xxxxxxxxxxxxxx
Subject: RE: Big performance slowdown from 11.2 to 13.3



-----Original Message-----
From: Tom Lane <tgl@xxxxxxxxxxxxx> 
Sent: Wednesday, July 21, 2021 19:43
To: ldh@xxxxxxxxxxxxxxxxxx
Cc: Peter Geoghegan <pg@xxxxxxx>; Justin Pryzby <pryzby@xxxxxxxxxxxxx>; pgsql-performance@xxxxxxxxxxxxxx
Subject: Re: Big performance slowdown from 11.2 to 13.3

"ldh@xxxxxxxxxxxxxxxxxx" <ldh@xxxxxxxxxxxxxxxxxx> writes:
> From: Peter Geoghegan <pg@xxxxxxx> 
>> I imagine that this has something to do with the fact that the hash aggregate spills to disk in Postgres 13.

> So how is this happening? I mean, it's the exact same query, looks like the same plan to me, it's the same data on the exact same VM etc... Why is that behavior so different?

What Peter's pointing out is that v11 never spilled hashagg hash tables to
disk, period, no matter how big they got (possibly leading to out-of-memory
situations or swapping, but evidently you have enough RAM to have avoided
that sort of trouble).  I'd momentarily forgotten that, but I think he's
dead on about that explaining the difference.  As he says, messing with
hash_mem_multiplier would be a more targeted fix than increasing work_mem
across the board.

			regards, tom lane
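
A minimal sketch of that more targeted fix, assuming PostgreSQL 13 and a psql session against the same database (the values below are illustrative only, not a recommendation from this thread):

  -- hash_mem_multiplier (new in v13) raises the memory ceiling for hash-based
  -- nodes only: they may use up to work_mem * hash_mem_multiplier before
  -- spilling to disk, while other nodes remain capped at work_mem.
  SET work_mem = '256MB';              -- illustrative baseline
  SET hash_mem_multiplier = 4;         -- hash tables may grow to ~1GB
  EXPLAIN (ANALYZE, BUFFERS)
  SELECT ...;                          -- the original aggregate query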


OK, got it! That sounds and smells good. Will try later tonight or tomorrow and report back.

Thank you!
Laurent.






Hello all,

Seems like no cigar ☹. See the plan pasted below. I changed the configuration as follows:
  - hash_mem_multiplier = '2'
  - work_mem = '1GB'

I tried a few other configurations, e.g., 512MB/4 and 256MB/8, with similar results.
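
A sketch of trying those combinations per-session rather than in postgresql.conf, assuming the same aggregate query is run interactively (note that 1GB x 2, 512MB x 4, and 256MB x 8 all yield the same ~2GB hash ceiling, since the effective limit is work_mem * hash_mem_multiplier):

  SET work_mem = '512MB';
  SET hash_mem_multiplier = 4;             -- 512MB * 4 = 2GB hash limit
  EXPLAIN (ANALYZE, BUFFERS) SELECT ...;   -- same aggregate query

  SET work_mem = '256MB';
  SET hash_mem_multiplier = 8;             -- 256MB * 8 = 2GB hash limit
  EXPLAIN (ANALYZE, BUFFERS) SELECT ...;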

Also, you mentioned previously that the hash was spilling to disk. How are you seeing this in the plans? What should I be looking for on my end when playing around with the parameters to see the intended effect?

Thank you,
Laurent.


HashAggregate  (cost=1774568.21..1774579.21 rows=200 width=1260) (actual time=70844.078..1554843.323 rows=722853 loops=1)
  Group Key: assessmenticcqa_raw.iccqar_iccassmt_fk
  Batches: 1  Memory Usage: 1277985kB
  Buffers: shared hit=14 read=169854, temp read=15777 written=27588
  ->  HashAggregate  (cost=1360804.75..1374830.63 rows=1402588 width=56) (actual time=23370.026..33839.347 rows=13852618 loops=1)
        Group Key: assessmenticcqa_raw.iccqar_iccassmt_fk, assessmenticcqa_raw.iccqar_ques_code
        Batches: 5  Memory Usage: 2400305kB  Disk Usage: 126560kB
        Buffers: shared read=169851, temp read=15777 written=27588
        ->  Seq Scan on assessmenticcqa_raw  (cost=0.00..1256856.62 rows=13859750 width=38) (actual time=0.072..10906.894 rows=13852618 loops=1)
              Filter: ((iccqar_ques_code)::text = ANY ('{"DEBRIDEMENT DATE","DEBRIDEMENT THIS VISIT","DEBRIDEMENT TYPE","DEPTH (CM)","DEPTH DESCRIPTION","DOES PATIENT HAVE PAIN ASSOCIATED WITH THIS WOUND?","DRAIN PRESENT","DRAIN TYPE","EDGE / SURROUNDING TISSUE - MACERATION",EDGES,EPITHELIALIZATION,"EXUDATE AMOUNT","EXUDATE TYPE","GRANULATION TISSUE","INDICATE OTHER TYPE OF WOUND CLOSURE","INDICATE TYPE","INDICATE WOUND CLOSURE","IS THIS A CLOSED SURGICAL WOUND OR SUSPECTED DEEP TISSUE INJURY?","LENGTH (CM)","MEASUREMENTS TAKEN","NECROTIC TISSUE AMOUNT","NECROTIC TISSUE TYPE",ODOR,"OTHER COMMENTS REGARDING DEBRIDEMENT TYPE","OTHER COMMENTS REGARDING DRAIN TYPE","OTHER COMMENTS REGARDING PAIN INTERVENTIONS","OTHER COMMENTS REGARDING PAIN QUALITY","OTHER COMMENTS REGARDING REASON MEASUREMENTS NOT TAKEN","PAIN FREQUENCY","PAIN INTERVENTIONS","PAIN QUALITY","PERIPHERAL TISSUE EDEMA","PERIPHERAL TISSUE INDURATION","REASON MEASUREMENTS NOT TAKEN","RESPONSE TO PAIN INTERVENTIONS",SHAPE,"SIGNS AND SYMPTOMS OF INFECTION","SKIN COLOR SURROUNDING WOUND",STATE,"SURFACE AREA (SQ CM)","TOTAL NECROTIC TISSUE ESCHAR","TOTAL NECROTIC TISSUE SLOUGH",TUNNELING,"TUNNELING SIZE(CM)/LOCATION - 12 - 3 O''CLOCK","TUNNELING SIZE(CM)/LOCATION - 3 - 6 O''CLOCK","TUNNELING SIZE(CM)/LOCATION - 6 - 9 O''CLOCK","TUNNELING SIZE(CM)/LOCATION - 9 - 12 O''CLOCK",UNDERMINING,"UNDERMINING SIZE(CM)/LOCATION - 12 - 3 O''CLOCK","UNDERMINING SIZE(CM)/LOCATION - 3 - 6 O''CLOCK","UNDERMINING SIZE(CM)/LOCATION - 6 - 9 O''CLOCK","UNDERMINING SIZE(CM)/LOCATION - 9 - 12 O''CLOCK","WIDTH (CM)","WOUND PAIN LEVEL, WHERE 0 = \"NO PAIN\" AND 10 = \"WORST POSSIBLE PAIN\""}'::text[]))
              Rows Removed by Filter: 171680
              Buffers: shared read=169851
Settings: effective_cache_size = '52GB', from_collapse_limit = '24', hash_mem_multiplier = '2', jit = 'off', jit_above_cost = '2e+08', jit_inline_above_cost = '5e+08', jit_optimize_above_cost = '5e+08', join_collapse_limit = '24', max_parallel_workers = '20', max_parallel_workers_per_gather = '8', random_page_cost = '1.1', temp_buffers = '4GB', work_mem = '1GB'
Planning:
  Buffers: shared hit=186 read=37
Planning Time: 3.667 ms
Execution Time: 1555300.746 ms
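
On where the spill is visible: in the plan above, the inner HashAggregate reports "Batches: 5 ... Disk Usage: 126560kB" and its Buffers lines include "temp read=15777 written=27588", which is the hash aggregation writing batch files to disk, whereas the outer HashAggregate shows "Batches: 1" and no Disk Usage, i.e. it stayed in memory. A minimal way to check this while experimenting, assuming psql against the same database:

  EXPLAIN (ANALYZE, BUFFERS) SELECT ...;   -- the query under test
  -- Under each HashAggregate node, look for:
  --   Batches: 1  Memory Usage: ...kB                      -> stayed in memory
  --   Batches: N  Memory Usage: ...kB  Disk Usage: ...kB   -> spilled to disk
  --   Buffers: ... temp read=... written=...               -> temp-file I/O from the spill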



