Howdy guys,

I have a PHP backup script running as a cron job. It contains an exec(mysqldump) line followed, a bit later, by an exec(tar -cjf) line. The script runs against a slave DB in the wee hours, while the master receives a continuous stream of inputs at a rate of about 18,720 per day. Obviously, as the database grows, the script takes longer and longer to run: after about 15 months of operation it now takes about 40 minutes to complete. However, in a test yesterday, the mysqldump step alone finished in about 2 minutes.

I've never lost data, but my concern is that since master-slave syncing is blocked while the script runs, I just might lose something if there's a buffer overrun or something like that.

Question 1: Does mysqldump's connection to the slave DB exist for the entire script execution time, or only for the duration of the mysqldump command itself? I imagine that if I used mysql_connect() in the script, the connection would last for the whole script execution, but mysqldump makes its own connection, so I'm just not sure.

Question 2: Just to be on the safe side, should I break the mysqldump code out into its own script and run it, say, 30 minutes before the tar step?

Thanks for your time,
David
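In case it helps to see what I mean by the split in Question 2, here's a rough sketch as two separate cron entries instead of one PHP script (all the paths, times, user name, and DB name below are made up for illustration; my real script passes different arguments to exec()):

```shell
# Hypothetical crontab entries sketching the split backup job.
# 02:00 - dump the slave DB; mysqldump's connection to MySQL only lasts
#         until this command exits, nothing stays open afterwards
0 2 * * * /usr/bin/mysqldump -u backupuser mydb > /backups/mydb.sql

# 02:30 - compress the dump; this step touches only the file on disk
#         and involves no database connection at all
30 2 * * * /usr/bin/tar -cjf /backups/mydb.sql.tar.bz2 -C /backups mydb.sql
```

The idea being that even if tar runs long as the dump file grows, the DB side of the work is already finished by then.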