On Fri, Oct 28, 2011 at 03:42:48PM -0400, Eric Butera wrote:
> On Fri, Oct 28, 2011 at 3:29 PM, Daniel Brown <danbrown@xxxxxxx> wrote:
> > On Fri, Oct 28, 2011 at 13:25, Jim Long <php@xxxxxxxxxxxxx> wrote:
> >>
> >> Eric:
> >>
> >> Thanks for your reply.
> >>
> >> "process row here" is a comment.  It doesn't do anything.  The
> >> script, exactly as shown, runs out of memory, exactly as shown.
> >
> >     My response presumes that you're planning on placing something
> > into this comment area, in which the memory will only further
> > increase.  If *presumed* should be replaced by *ASSumed* in this case,
> > skip mysql_unbuffered_query() and go straight for mysql_num_rows().
> > Do not pass GO.  Do not collect $200.
> >
> > --
> > </Daniel P. Brown>
> > Network Infrastructure Manager
> > http://www.php.net/
> >
>
> I was glad to learn what comments were.

Eric:

Please forgive me if I was curt in my message to you. I don't mean to bite
the hands that are trying to help.

As Daniel rightly observed, my concern is simply that if an essentially
empty while loop runs out of memory just stepping through the records, I'm
really going to be hosed once I start doing productive work inside the loop.

Daniel's memory trend analysis is helpful. I'm testing from the command
line, so there's no Apache overhead, but the memory usage starts out like
this:

      1     12145496
      2     12145976
      3     12146408
      4     12146804
      5     12147200
      6     12147596
...

I normally prefer to work in PostgreSQL, but the client has already gone
down the MySQL road. Just for edification's sake, I exported the table to
PostgreSQL and re-worked my code:

if (!($db_conn = pg_connect( "host=$db_host user=$db_user dbname=$db_name password=$db_pwd" )))
    die( "Can't connect to SQL server\n" );

$qry = "select * from test_table order by contract";

if ($result = pg_query( $db_conn, $qry )) {

    $n = 0;
    while ($row = pg_fetch_assoc( $result )) {
        unset( $row );
        $n++;
        echo sprintf( "%7d %12d\n", $n, memory_get_peak_usage() );
    } // while

    pg_free_result( $result );
    echo "$n\n";

} else {
    die( pg_last_error() . "\n" );
}

Using PostgreSQL (on a completely different machine), this runs to
completion, and memory consumption is nearly flat:

      1       329412
      2       329724
      3       329796
      4       329796
      5       329796
...
 295283       329860
 295284       329860
 295285       329860
 295286       329860
 295287       329860
295287

If one were to describe the memory consumption as a 'leak', then PostgreSQL
is leaking at a much slower rate than MySQL: Postgres leaks about as much
over the entire run (329860 - 329412 = 448 bytes) as MySQL does on each
row. Put another way, the MySQL version leaks memory almost 300,000 times
faster.

My PostgreSQL machine also has MySQL installed, so I ran the MySQL version
of the code there as well, as a second opinion if you like. It leaked
memory almost as badly as my client's PHP/MySQL installation, but a little
more slowly, 396 bytes or so per row. The slower memory consumption allowed
the code to run to completion, barely:

      1     12149492
      2     12149972
      3     12150404
      4     12150800
...
 295284    129087704
 295285    129088100
 295286    129088496
 295287    129088892
295287

So is this just a difference in the programming quality of the MySQL vs.
PostgreSQL database extensions, such that one gobbles up memory profusely
while the other has only a slight, slow leak?

I will try experimenting with Daniel's idea of unbuffered queries, but my
understanding is that while an unbuffered result resource is in use, no
other SQL transactions can be conducted on that connection.
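If I'm reading the manual correctly, the unbuffered version of my test loop
would look something like the sketch below (untested, and re-using the same
$db_host, $db_user, $db_pwd and $db_name variables as in the PostgreSQL
snippet above):

if (!($db_conn = mysql_connect( $db_host, $db_user, $db_pwd )))
    die( "Can't connect to SQL server\n" );

if (!mysql_select_db( $db_name, $db_conn ))
    die( "Can't select database\n" );

$qry = "select * from test_table order by contract";

// mysql_unbuffered_query() streams rows from the server instead of
// buffering the whole result set in PHP's memory
if ($result = mysql_unbuffered_query( $qry, $db_conn )) {

    $n = 0;
    while ($row = mysql_fetch_assoc( $result )) {
        // process row here
        unset( $row );
        $n++;
        echo sprintf( "%7d %12d\n", $n, memory_get_peak_usage() );
    } // while

    mysql_free_result( $result );
    echo "$n\n";

} else {
    die( mysql_error( $db_conn ) . "\n" );
}

If the client-side buffering really is the culprit, I'd expect this version
to show the same nearly flat memory curve as the PostgreSQL run.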
Maybe I can get around that by using one MySQL connection for the
unbuffered query, and another, separate MySQL connection for the incidental
SQL queries that I need to perform as I process each record from the large
dataset.

Jim
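P.S. Something along these lines is what I have in mind for the
two-connection approach. Completely untested, and the lookup_table query
inside the loop is just a placeholder for whatever per-record work I
actually end up doing:

// Dedicated connection for the big unbuffered read.
$read_conn = mysql_connect( $db_host, $db_user, $db_pwd );

// Second, independent connection for incidental queries; the 'true'
// (new_link) argument stops mysql_connect() from re-using the first
// link even though the credentials are identical.
$work_conn = mysql_connect( $db_host, $db_user, $db_pwd, true );

if (!$read_conn || !$work_conn)
    die( "Can't connect to SQL server\n" );

mysql_select_db( $db_name, $read_conn );
mysql_select_db( $db_name, $work_conn );

$result = mysql_unbuffered_query( "select * from test_table order by contract", $read_conn );

while ($row = mysql_fetch_assoc( $result )) {
    // Incidental queries go through $work_conn so they don't collide
    // with the unbuffered result still pending on $read_conn.
    $info = mysql_query( "select * from lookup_table where contract = '"
          . mysql_real_escape_string( $row['contract'], $work_conn ) . "'",
          $work_conn );
    // ... process $row and $info here ...
}

mysql_free_result( $result );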