Re: same query in high number of times

On Sun, Jun 21, 2009 at 10:01 PM, Justin Graf <justin@xxxxxxxxxxxxxxx> wrote:
Peter Alban wrote:
duration: 2533.734 ms  statement:

SELECT news.url_text, news.title, comments.name, comments.createdate, comments.user_id, comments.comment
FROM news, comments
WHERE comments.cid = news.id AND comments.published = '1'
GROUP BY news.url_text, news.title, comments.name, comments.createdate, comments.user_id, comments.comment
ORDER BY comments.createdate DESC
LIMIT 3


And here is the query plan :
                                                              QUERY PLAN                                                              
----------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=4313.54..4313.55 rows=3 width=595) (actual time=288.525..288.528 rows=3 loops=1)
   ->  Sort  (cost=4313.54..4347.26 rows=13486 width=595) (actual time=288.523..288.523 rows=3 loops=1)
         Sort Key: comments.createdate
         ->  HashAggregate  (cost=3253.60..3388.46 rows=13486 width=595) (actual time=137.521..148.132 rows=13415 loops=1)
               ->  Hash Join  (cost=1400.73..3051.31 rows=13486 width=595) (actual time=14.298..51.049 rows=13578 loops=1)
                     Hash Cond: ("outer".cid = "inner".id)
                     ->  Seq Scan on comments  (cost=0.00..1178.72 rows=13480 width=522) (actual time=0.012..17.434 rows=13418 loops=1)
                           Filter: (published = 1)
                     ->  Hash  (cost=1391.18..1391.18 rows=3818 width=81) (actual time=14.268..14.268 rows=3818 loops=1)
                           ->  Seq Scan on news  (cost=0.00..1391.18 rows=3818 width=81) (actual time=0.021..10.072 rows=3818 loops=1)

The same query is being requested from different sessions, so why is it not being cached?


Because the query results are not cached; only the raw table data is. The query is re-run every time it is requested.
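If the result really is identical for every session, one option is to cache it yourself, for example in a small table that gets refreshed from cron or after new comments are inserted (just a sketch; the table name latest_comments_cache is made up):

-- build the cache once
CREATE TABLE latest_comments_cache AS
  SELECT news.url_text, news.title, comments.name, comments.createdate,
         comments.user_id, comments.comment
  FROM news
  JOIN comments ON comments.cid = news.id
  WHERE comments.published = '1'
  ORDER BY comments.createdate DESC
  LIMIT 3;

-- later, to refresh it:
-- TRUNCATE latest_comments_cache;
-- INSERT INTO latest_comments_cache SELECT ... (same query again);

The application then just does SELECT * FROM latest_comments_cache instead of running the full join and sort on every page view.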

What is the GROUP BY clause accomplishing?
The sort and the HashAggregate are eating up all the time.
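If the GROUP BY is only there to squash duplicate rows, dropping it (or replacing it with DISTINCT) and indexing the sort column should let the planner read the newest comments directly instead of aggregating all 13,000 rows first. A sketch, assuming nothing else depends on the grouping and that the index name is free:

CREATE INDEX comments_createdate_idx ON comments (createdate);

SELECT news.url_text, news.title, comments.name, comments.createdate,
       comments.user_id, comments.comment
FROM comments
JOIN news ON news.id = comments.cid
WHERE comments.published = '1'
ORDER BY comments.createdate DESC
LIMIT 3;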

So does this mean that running the query above against, say, a 5 MB table that is already in memory takes 2 seconds?

If so, we probably have really slow memory :)
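For what it's worth, something like this should show how big those two tables really are and whether they are being read from shared_buffers at all (a rough sketch using the standard statistics views; sizes include indexes):

SELECT relname, pg_size_pretty(pg_total_relation_size(oid))
FROM pg_class
WHERE relname IN ('news', 'comments');

SELECT relname, heap_blks_read, heap_blks_hit
FROM pg_statio_user_tables
WHERE relname IN ('news', 'comments');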

Besides, the query makes little sense to me, but I don't write the queries (yet); I'm just looking at the server side.
So do you suggest tuning the queries, or should I rather look for other monitoring tools?
cheers,
Peter
 


work_mem = 51024                        # min 64, size in KB

That's a lot of memory dedicated to work_mem. If you have 30 connections open, this could eat up 1.5 GB, pushing data out of the cache.
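Rough arithmetic: 51024 kB is about 50 MB, and each of 30 backends can grab that much for every sort or hash step, so the worst case is around 1.5 GB. A more cautious setup (example values only, not a recommendation) is a small global default plus a per-session bump for the few queries that actually need it:

# postgresql.conf
work_mem = 4096                         # ~4 MB, size in KB

-- then, in the session running the heavy query:
SET work_mem = '50MB';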







