Chris wrote:
Stut wrote:
Chris wrote:
Rob Adams wrote:
I have a query that I run against MySQL that returns 60,000-plus rows.
The result set is so large that I've just been testing with a
LIMIT 0, 10000 (ten thousand) on the query. That used to take about 10
minutes to run, including the processing time in PHP, which spits out
XML from the query results. I then tried chunking the query into
1,000-row increments. The script processed 10,000 rows in 23 seconds!
I was amazed! Unfortunately it takes quite a bit longer than 6*23
seconds to process all 60,000 rows that way (1,000 at a time); it takes
almost 8 minutes. I can't figure out why it takes so long, or how to
make it faster. The data for the 60,000 rows is about 120MB, so I would
prefer not to use a temporary table. Any other suggestions? This is
probably more a DB issue than a PHP issue, but I thought I'd try here
first.
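For reference, the chunked loop is roughly this shape (simplified
sketch only; 'big_table', the connection details and the query itself
are placeholders for the real ones):

<?php
// Rough sketch -- table name, credentials and query are placeholders.
$db = mysql_connect('localhost', 'user', 'pass') or die(mysql_error());
mysql_select_db('mydb', $db);

$chunk  = 1000;
$offset = 0;
do {
    $res = mysql_query("SELECT * FROM big_table LIMIT $offset, $chunk")
        or die(mysql_error());
    $got = 0;
    while ($row = mysql_fetch_assoc($res)) {
        // ... build the XML for this row here ...
        $got++;
    }
    mysql_free_result($res);
    $offset += $chunk;
} while ($got == $chunk);
?>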
Sounds like missing indexes or something.
Use EXPLAIN: http://dev.mysql.com/doc/refman/4.1/en/explain.html
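For example, something along these lines (substitute your real query;
'big_table' and 'some_col' are just placeholders) will show whether an
index is being used and how many rows MySQL expects to examine:

// Assumes a mysql_connect() connection is already open.
$res = mysql_query(
    "EXPLAIN SELECT * FROM big_table WHERE some_col = 'x' LIMIT 0, 1000"
) or die(mysql_error());
while ($row = mysql_fetch_assoc($res)) {
    print_r($row); // check the 'key', 'type' and 'rows' columns
}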
If that were the case, I wouldn't expect limiting the number of rows
returned to make a difference, since the actual query is the same.
Actually it can. I don't think MySQL does this, but PostgreSQL does take
the LIMIT/OFFSET clauses into account when generating a plan.
http://www.postgresql.org/docs/current/static/sql-select.html#SQL-LIMIT
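For example, on the PostgreSQL side you can watch the plan change when a
LIMIT is added; something like this (hypothetical table and column
names, untested):

$conn = pg_connect('dbname=test') or die('connect failed');
// Plan without a LIMIT -- often a seq scan plus a full sort.
$res = pg_query($conn, "EXPLAIN SELECT * FROM big_table ORDER BY id");
while ($row = pg_fetch_row($res)) {
    echo $row[0], "\n";
}
// Plan with a LIMIT -- can switch to an index scan that stops early.
$res = pg_query($conn, "EXPLAIN SELECT * FROM big_table ORDER BY id LIMIT 10");
while ($row = pg_fetch_row($res)) {
    echo $row[0], "\n";
}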
Not really relevant to the problem though :P
How many queries do you run with an ORDER BY, though? But you're right:
if there is no ORDER BY clause, adding a LIMIT probably will make a
difference. However, there must be an ORDER BY when you use LIMIT this
way, to ensure the SQL engine doesn't give you the same rows in response
to more than one of the chunked queries.
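In other words, something like this (hypothetical table and column
names), where the ORDER BY is on a unique, indexed column so each chunk
is well defined and no row can turn up in two chunks:

// Without an ORDER BY the engine is free to return rows in any order,
// so two "chunks" can overlap or miss rows between queries.
$risky = "SELECT * FROM big_table LIMIT 1000, 1000";

// With an ORDER BY on a unique column ('id' here is a placeholder)
// every chunk is deterministic across repeated queries.
$safe  = "SELECT * FROM big_table ORDER BY id LIMIT 1000, 1000";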
-Stut
--
http://stut.net/
--
PHP Database Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php