On 2013-11-04 13:56, Kevin Grittner wrote:
> Caio Casimiro <casimiro.listas@xxxxxxxxx> wrote:
>> I have one query running at ~7 seconds, and I would like to know
>> whether it's possible to make it run faster, since this query runs
>> many times in my experiment.
>>
>> Buffers: shared hit=2390 read=32778
>> Total runtime: 24066.145 ms
>> effective_cache_size = 2GB
>>
>> It seems the problem is with the 'tweet' table.
> The EXPLAIN ANALYZE output shows it taking 24 seconds, 8.9 seconds
> of which is in accessing the tweet_topic table and 15.1 seconds in
> accessing the tweet table. It looks like you have a painfully low
> cache hit ratio. The plan looks reasonable to me; it looks like
> you need more RAM to cache data if you want better speed.
>
> --
> Kevin Grittner
> EDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company
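As a side note on the cache hit ratio Kevin mentions: it can be checked per table with PostgreSQL's pg_statio_user_tables statistics view. A rough sketch, with the table names taken from this thread:

```sql
-- Per-table buffer cache hit ratio. Ratios well below ~0.99 for
-- frequently queried tables suggest the working set doesn't fit
-- in shared_buffers.
SELECT relname,
       heap_blks_hit,
       heap_blks_read,
       round(heap_blks_hit::numeric
             / nullif(heap_blks_hit + heap_blks_read, 0), 4) AS hit_ratio
FROM pg_statio_user_tables
WHERE relname IN ('tweet', 'tweet_topic');
```

Note this only counts hits in shared_buffers; blocks served from the OS page cache still show up as reads.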
There's also an index scan that turns up 1.8 million rows, only about
1,600 of which wind up in the final output. I'd start by restating the
"user_id in (select followed_id ...)" as a join against the relationship
table. The planner is filtering first on the tweet time, but that
doesn't narrow the set of tweets much. Assuming the user being looked
up doesn't follow a large proportion of other users, reducing the set
by followed users first should be quicker.
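A sketch of that rewrite. The table and column names here (relationship, follower_id, followed_id, creation_time) are assumptions based on the discussion, not the poster's actual schema:

```sql
-- Instead of:
--   SELECT ... FROM tweet t
--   WHERE t.creation_time BETWEEN $2 AND $3
--     AND t.user_id IN (SELECT followed_id FROM relationship
--                       WHERE follower_id = $1);
-- drive the query from the (presumably small) set of followed users:
SELECT t.*
FROM relationship r
JOIN tweet t ON t.user_id = r.followed_id
WHERE r.follower_id = $1
  AND t.creation_time BETWEEN $2 AND $3;
```

With an index on tweet (user_id, creation_time), this lets the planner fetch only each followed user's tweets in the time window, rather than scanning the whole window and filtering.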