Re: Performance issues migrating from 7.4.3 to 8.2.6


 



Hi Gregory/All,

Thanks for your time.

Yes, the difference is pretty small but it does seem to be consistent. The problem I have is that this is just part of the query; I have tried to break things down so that I can see where the time is being spent.  I set default_statistics_target to 1000, after going via 100, but it seemed to make no difference.
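
For completeness, the statistics change was applied roughly like this (a sketch only; whether the target is set per session or in postgresql.conf, a fresh ANALYZE is needed before the planner sees the new statistics):

    -- sketch: raise the statistics target and re-gather statistics
    SET default_statistics_target = 1000;
    ANALYZE;  -- re-sample the tables so the new target actually takes effect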

I have a confession to make though: this is not like for like. I did in fact have to add a couple of indexes to the data because the performance was so bad with 8.2.6. Very sorry for that, it doesn't help. The actual difference is from 2ms to 57ms when these indexes are removed, which is much more significant.  Here is the like-for-like comparison with 8.2.6; the indexes were added to the market_group_relation table since it is doing a seq scan at the very end (they are sketched just after the plan below).

"Nested Loop  (cost=0.00..54.03 rows=1 width=458) (actual time=0.279..57.457 rows=14 loops=1)"
"  Join Filter: (mgr.market_group_id = mgpr.market_group_id)"
"  ->  Nested Loop  (cost=0.00..29.19 rows=1 width=439) (actual time=0.102..4.867 rows=189 loops=1)"
"        ->  Nested Loop  (cost=0.00..28.91 rows=1 width=358) (actual time=0.095..3.441 rows=189 loops=1)"
"              ->  Nested Loop  (cost=0.00..20.60 rows=1 width=327) (actual time=0.082..1.639 rows=189 loops=1)"
"                    ->  Nested Loop  (cost=0.00..9.95 rows=1 width=147) (actual time=0.054..0.138 rows=27 loops=1)"
"                          ->  Seq Scan on market mrkt  (cost=0.00..1.65 rows=1 width=87) (actual time=0.020..0.020 rows=1 loops=1)"
"                                Filter: ((live <> 'X'::bpchar) AND (market_id = 10039))"
"                          ->  Index Scan using accommodation_price_panel_idx1 on accommodation_price_panel app  (cost=0.00..8.30 rows=1 width=60) (actual time=0.029..0.079 rows=27 loops=1)"
"                                Index Cond: ((contract_id = 16077) AND ((code)::text = 'LONHRL'::text) AND (code_type = 'IS'::bpchar))"
"                                Filter: (live <> 'X'::bpchar)"
"                    ->  Index Scan using daily_rates_pkey on daily_rates dr  (cost=0.00..10.63 rows=1 width=180) (actual time=0.021..0.041 rows=7 loops=27)"
"                          Index Cond: ((app.accommodation_price_panel_id = dr.accommodation_price_panel_id) AND (dr.room_type = 'Zk'::bpchar) AND (dr.board_type = 'BB'::bpchar) AND (dr.min_group_size = 0))"
"                          Filter: (((start_date >= '2008-05-22'::date) AND (start_date <= '2008-05-31'::date)) OR (('2008-05-22'::date >= start_date) AND ('2008-05-22'::date <= end_date)))"
"              ->  Index Scan using market_group_price_relation_pkey on market_group_price_relation mgpr  (cost=0.00..8.30 rows=1 width=35) (actual time=0.005..0.006 rows=1 loops=189)"
"                    Index Cond: (app.accommodation_price_panel_id = mgpr.price_panel_id)"
"        ->  Index Scan using market_group_pkey on market_group mg  (cost=0.00..0.27 rows=1 width=81) (actual time=0.003..0.004 rows=1 loops=189)"
"              Index Cond: (mgpr.market_group_id = mg.market_group_id)"
"              Filter: (live <> 'X'::bpchar)"
"  ->  Seq Scan on market_group_relation mgr  (cost=0.00..24.46 rows=30 width=31) (actual time=0.068..0.259 rows=30 loops=189)"
"        Filter: (10039 = market_id)"
"Total runtime: 57.648 ms"



Gregory Stark wrote:
"Matthew Lunnon" <mlunnon@xxxxxxxxxxxxx> writes:

  
In this case the query takes 6.037 ms to run on 8.2.6 and 2.332 ms to run on 7.4.3.
    

The difference between 2ms and 6ms is pretty negligible. A single context
switch or disk cache miss could throw the results off by that margin in either
direction.

But what plan does 7.4.3 come up with if you set enable_hashjoin = off? I'm
curious whether it comes up with the same nested loops plan as 8.2 and what
cost it says it has.
  
I'll investigate and let you know.
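Presumably something along these lines on the 7.4.3 server (a sketch; the query itself is the one already posted, so it is elided here):

    -- sketch: steer 7.4.3 away from the hash join for this session only
    SET enable_hashjoin = off;
    -- ... re-run the original query under EXPLAIN ANALYZE here ...
    RESET enable_hashjoin;
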
I think you need to find queries which take longer to have any reliable
performance comparisons. Note that the configuration parameters here aren't
the same at all; it's possible the change of effective_cache_size from 800k to
2GB is what's changing the cost estimation. I seem to recall a change in the
arithmetic for calculating nested-loop costs too, which made it more aggressive
in estimating cache effectiveness.
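
Ah, OK. Something like this on the 8.2.6 box would show whether that setting alone changes the plan, I suppose (a sketch; the '800kB' figure is just the 7.4.3 value quoted above taken at face value):

    -- sketch: temporarily plan with the old, smaller cache estimate
    SHOW effective_cache_size;
    SET effective_cache_size = '800kB';  -- assumed 7.4.3 value; adjust as appropriate
    -- ... EXPLAIN ANALYZE the query again here ...
    RESET effective_cache_size;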

Incidentally, default_statistics_target=1000 is awfully aggressive. I found in
the past that that caused the statistics table to become much larger and much
slower to access. It may have caused some statistics to be toasted or it may
have just been the sheer volume of data present. It will also make your
ANALYZEs take a lot longer. I would suggest trying 100 first and incrementally
raising it rather than jumping straight to 1000. And preferably only on the
columns which really matter.
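
And for the per-column suggestion, presumably along these lines (a sketch; the table and column are just taken from the plan above as an illustration):

    -- sketch: raise the target only where the estimates are actually poor,
    -- then re-analyze just that table
    ALTER TABLE market_group_relation ALTER COLUMN market_id SET STATISTICS 100;
    ANALYZE market_group_relation;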

  

-- 
Matthew Lunnon
Technical Consultant
RWA Ltd.

 mlunnon@xxxxxxxxxxxxx
 Tel: +44 (0)29 2081 5056
 www.rwa-net.co.uk
--
