Re: Poor performance on a simple join

On Wed, Nov 2, 2011 at 3:53 PM, CS DBA <cs_dba@xxxxxxxxxxxxxxxxxxx> wrote:
> On 11/02/2011 02:45 PM, Scott Marlowe wrote:
>>
>> On Wed, Nov 2, 2011 at 2:21 PM, CS DBA <cs_dba@xxxxxxxxxxxxxxxxxxx> wrote:
>>>
>>> Hi All;
>>>
>>> The contab2 table below contains ~400,000 rows. This query should
>>> not take this long.  We've tweaked work_mem up to 50MB, ensured
>>> that the appropriate indexes are in place, etc...
>>>
>>> Thoughts?
>>>
>>> Thanks in advance
>>
>> How long should it take?  300 milliseconds is fairly fast for mushing
>> 129k rows up against 26k rows and getting 12k rows back.  That's 40
>> rows / millisecond, which isn't too bad really.
>>
>>
>> What pg version are you running?  What evidence do you have that this
>> is slow, i.e. other machines you've run it on where it's faster?  What
>> hardware (CPU, RAM, IO subsystem, OS) are you running on?
>>
>>>
>>> Explain analyze:
>>> SELECT contab2.contacts_tab
>>> FROM contab2
>>> INNER JOIN sctab
>>>    ON sctab.id = contab2.to_service_id
>>>        AND sctab.type IN ('FService', 'FqService', 'LService',
>>> 'TService')
>>> WHERE contab2.from_contact_id=402513;
>>>
>>>                                                                QUERY PLAN
>>> -----------------------------------------------------------------------------------------------------------------------------------------------------------
>>>  Hash Join  (cost=16904.28..25004.54 rows=26852 width=4) (actual time=302.621..371.599 rows=12384 loops=1)
>>>    Hash Cond: (contab2.to_service_id = sctab.id)
>>>    ->  Bitmap Heap Scan on contab2  (cost=1036.49..8566.14 rows=26852 width=20) (actual time=5.191..32.701 rows=26963 loops=1)
>>>          Recheck Cond: (from_contact_id = 402513)
>>>          ->  Bitmap Index Scan on index_contab2_on_from_user_id  (cost=0.00..1029.78 rows=26852 width=0) (actual time=4.779..4.779 rows=26963 loops=1)
>>>                Index Cond: (from_contact_id = 402513)
>>>    ->  Hash  (cost=14445.19..14445.19 rows=113808 width=16) (actual time=297.332..297.332 rows=129945 loops=1)
>>>          Buckets: 16384  Batches: 1  Memory Usage: 6092kB
>>>          ->  Bitmap Heap Scan on sctab  (cost=2447.07..14445.19 rows=113808 width=16) (actual time=29.480..187.166 rows=129945 loops=1)
>>>                Recheck Cond: ((type)::text = ANY ('{FService,FqService,LService,TService}'::text[]))
>>>                ->  Bitmap Index Scan on index_sctab_on_type  (cost=0.00..2418.62 rows=113808 width=0) (actual time=27.713..27.713 rows=130376 loops=1)
>>>                      Index Cond: ((type)::text = ANY ('{FService,FqService,LService,TService}'::text[]))
>>>  Total runtime: 382.514 ms
>>> (13 rows)
>>>
>>> --
>>> ---------------------------------------------
>>> Kevin Kempter       -       Consistent State
>>> A PostgreSQL Professional Services Company
>>>          www.consistentstate.com
>>> ---------------------------------------------
>>>
>>
> Agreed, but it's not fast enough for the client.  I think we're going to
> look at creating an aggregate table or maybe partitioning.

Take a look here:
http://tech.jonathangardner.net/wiki/PostgreSQL/Materialized_Views
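
For the archives: before PostgreSQL 9.3 there is no built-in CREATE
MATERIALIZED VIEW, so the page above hand-rolls a "snapshot" table.  A
minimal sketch against the tables from this thread might look like the
following (the mv table and refresh function names are illustrative
only, not from the original post):

-- Precompute the join once, so the client query becomes a single
-- indexed lookup instead of a 130k-row hash join.
CREATE TABLE contab2_service_mv AS
SELECT contab2.from_contact_id, contab2.contacts_tab
FROM contab2
INNER JOIN sctab
   ON sctab.id = contab2.to_service_id
  AND sctab.type IN ('FService', 'FqService', 'LService', 'TService');

CREATE INDEX index_contab2_service_mv_on_from_contact_id
    ON contab2_service_mv (from_contact_id);

-- Snapshot refresh: rebuild the whole table on whatever schedule the
-- application can tolerate the data being stale.
CREATE OR REPLACE FUNCTION refresh_contab2_service_mv() RETURNS void AS $$
BEGIN
    TRUNCATE contab2_service_mv;
    INSERT INTO contab2_service_mv
    SELECT contab2.from_contact_id, contab2.contacts_tab
    FROM contab2
    INNER JOIN sctab
       ON sctab.id = contab2.to_service_id
      AND sctab.type IN ('FService', 'FqService', 'LService', 'TService');
END;
$$ LANGUAGE plpgsql;

-- The original ~380 ms query then becomes:
--   SELECT contacts_tab FROM contab2_service_mv WHERE from_contact_id = 402513;

The trade-off is staleness between refreshes; the linked page also
walks through trigger-maintained variants if the client needs the
results to stay current.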

-- 
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


