Re: Slow query with a lot of data

On Mon, 18 Aug 2008, Moritz Onken wrote:
> Running the query for more than one user is indeed not much slower.
> That's what I need. I'm clustering the results table on domain right
> now. But why is this better than clustering it on "user"?

The reason is the way the merge join algorithm works. It takes two tables, sorts both by the join fields, and then streams through both tables, producing results as it goes. It's the best join algorithm, but it does require both tables to be sorted by the same thing, which in this case is domain. The aggregation on user happens after the join has been done, and the hash aggregate can accept the users in any order.
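To make that concrete, here is a minimal sketch of the two steps in Python (toy data and function names are hypothetical, not PostgreSQL's actual implementation): a merge join that streams through two inputs already sorted on the join key, followed by a hash aggregate that counts rows per user regardless of the order they arrive in.

```python
def merge_join(left, right, key):
    """Join two lists of dicts that are already sorted on `key`,
    streaming through both sides the way a merge join does."""
    results = []
    i = j = 0
    while i < len(left) and j < len(right):
        lk, rk = left[i][key], right[j][key]
        if lk < rk:
            i += 1
        elif lk > rk:
            j += 1
        else:
            # Equal keys: emit the cross product of the matching runs.
            i2 = i
            while i2 < len(left) and left[i2][key] == lk:
                j2 = j
                while j2 < len(right) and right[j2][key] == lk:
                    results.append({**left[i2], **right[j2]})
                    j2 += 1
                i2 += 1
            i = i2
            while j < len(right) and right[j][key] == lk:
                j += 1
    return results

def hash_aggregate(rows, group_key):
    """Count rows per group. Like a HashAggregate, it accepts
    groups in any order, so the join output need not be sorted
    on the grouping column."""
    counts = {}
    for row in rows:
        g = group_key(row)
        counts[g] = counts.get(g, 0) + 1
    return counts

# Both inputs sorted on "domain", mirroring the clustered index order.
result = [{"user": 1, "domain": "a.com"},
          {"user": 2, "domain": "a.com"},
          {"user": 1, "domain": "b.com"}]
domain_categories = [{"domain": "a.com", "category": 10},
                     {"domain": "b.com", "category": 20}]

joined = merge_join(result, domain_categories, "domain")
per_user = hash_aggregate(joined, lambda r: r["user"])
```

Because the aggregate hashes on user, neither input ever needs to be sorted on user; only the shared sort order on domain matters for the join.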

If you look at your last EXPLAIN, you'll see that it has to sort the result table on domain, whereas it can read domain_categories in domain order thanks to the clustered index.

"HashAggregate
"  ->  Merge Join
"        Merge Cond: (b.domain = a.domain)"
"        ->  Index Scan using domain_categories_domain on domain_categories b
"        ->  Sort
"              Sort Key: a.domain"
"              Sort Method:  quicksort  Memory: 27kB"
" -> Index Scan using result_user_idx on result a " Index Cond: ("user" = 1337)"

Without the user restriction and re-clustering, this should become:

"HashAggregate
"  ->  Merge Join
"        Merge Cond: (b.domain = a.domain)"
"        ->  Index Scan using domain_categories_domain on domain_categories b
"        ->  Index Scan using result_domain on result a

Matthew

--
Vacuums are nothings. We only mention them to let them know we know
they're there.

