Always found the same myself on large datasets and when working with high-traffic sites, but I can't replicate it in a non-live environment or with simple Grinder-style tests, so I just follow the procedure as standard practice in all my code now - likewise with dropping keep-alive times on Apache servers and some other little tweaks.

regards!

Michael Schaefer wrote:
> Interesting, I don't have any numerical benchmarks, but I saw a
> performance benefit from moving my result set into an array.
>
> My case may have been extreme: I was creating a table 350 rows by 350
> columns, several megabytes of output, and I found building the
> output directly from the query too slow.
>
> When I moved my results into an array first, the build time became
> reasonable.
>
> My theory was that I was not getting proper use out of the indexes on my
> data set, or that the interface I was using to reach my MS-SQL tables
> was inefficient in some other way.
>
> I may not have "proof", but I will be inclined to use this for a
> performance boost when working with large sets of data.
>
> On 1/21/2010 6:45 PM, Nathan Rixham wrote:
>> Shawn McKenzie wrote:
>>
>>> Daevid Vincent wrote:
>>>
>>>>> -----Original Message-----
>>>>> From: Nathan Rixham [mailto:nrixham@xxxxxxxxx]
>>>>>
>>>>> you'll also find a performance upgrade if you load all sql
>>>>> results into an array and close up the query / free the results
>>>>> before working on them.
>>>>>
>>>>> query
>>>>> for() {
>>>>>     $results[] = $row;
>>>>> }
>>>>> close stuff
>>>>> work on results
>>>>>
>>>> Do you have any proof of this?
>>>> What would be the logic here?
>>>> Got some example benchmarks?
>>>> Got a URL of a whitepaper or something that discusses this?
>>>>
>>>> I would think the only things the mysql calls are doing is holding
>>>> pointers to the next record and the DB handle.
>>>> I can't imagine they use that many resources, nor would they be
>>>> spinning and waiting, tying up CPU cycles either.
>>>>
>>>> I've never heard this to be the case.
>>>>
>>> I have proof to the contrary :-)
>>>
>>> <?php
>>> echo number_format(memory_get_usage()) . " Bytes used before query\n";
>>>
>>> $conn = mysql_connect('localhost', 'x', 'x');
>>> $result = mysql_query("SELECT * FROM table_name");
>>> $count = mysql_num_rows($result);
>>>
>>> echo $count . " rows returned\n";
>>>
>>> echo number_format(memory_get_usage()) . " Bytes used after query\n";
>>>
>>> while($rows[] = mysql_fetch_assoc($result)) { }
>>>
>>> echo number_format(memory_get_usage()) . " Bytes used after array creation\n";
>>>
>>> mysql_free_result($result);
>>>
>>> echo number_format(memory_get_usage()) . " Bytes used after mysql_free_result\n";
>>> ?>
>>>
>>> 65,588 Bytes used before query
>>> 6940 rows returned
>>> 67,264 Bytes used after query
>>> 6,376,612 Bytes used after array creation
>>> 6,376,756 Bytes used after mysql_free_result
>>>
>> yup, appears I was talking bollocks :D lolol - the confusion came on my
>> part from optimisation of a high-traffic site over the past few years!
>>
>> correction:
>>
>> on simple scripts it is marginally slower (due to two loops and more
>> data in memory).
>>
>> however - in real-world applications, where such nasty practices as
>> escaping in and out of HTML to render results and secondary per-row
>> queries or API calls are taken into account, this method of getting
>> everything out of the db in a single swift action gives the performance
>> boost - and moreover, on high-traffic sites, freeing up the connection /
>> query resources as quickly as possible makes a rather noticeable
>> impact; too many open connections and queries can be very nasty when
>> you're getting a few hundred / thousand requests per second.
>>
>> sorry 'bout that!
--
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php
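[Editor's note: a minimal sketch of the pattern the thread converges on - drain every row into an array, free the result as early as possible, and only then do the slow per-row work (HTML rendering, secondary queries). The `fetchAllRows()` helper and the in-memory `$result` array below are stand-ins invented here so the sketch runs without a database; real code would use mysqli or PDO and call `mysqli_free_result()` at the marked point, since the `mysql_*` functions in the quoted benchmark have long been removed from PHP.]

```php
<?php
// Pattern from the thread: copy all rows out of the result set first,
// release the result/connection, then do the expensive rendering work.

function fetchAllRows(iterable $result): array {
    $rows = [];
    foreach ($result as $row) {
        $rows[] = $row; // copy each row out of the result set
    }
    // In real code, this is where mysqli_free_result($result) would go,
    // releasing the server-side resources before any rendering starts.
    return $rows;
}

// Fake "result set" standing in for an actual query result.
$result = [
    ['id' => 1, 'name' => 'alpha'],
    ['id' => 2, 'name' => 'beta'],
];

$rows = fetchAllRows($result);

// The slow work happens only after the handle would have been freed.
foreach ($rows as $row) {
    echo $row['id'] . ': ' . $row['name'] . "\n";
}
// prints:
// 1: alpha
// 2: beta
```

One subtlety in the quoted benchmark: `while($rows[] = mysql_fetch_assoc($result)) { }` also appends the terminating `false` as the array's last element, since the assignment happens before the truth test; a fetch loop that checks the return value before appending avoids that.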