I am purely pulling things out of a hat here. It's probably not a good idea, just pure speculation, since I'm not very experienced with the advanced file functions and I'm not even sure this is correct or possible. But if for some odd reason it is a complete necessity that you keep this data in the database, here is a thought:

Use PHP to read each entry's data out of the old database and write it to a local file, one file per entry, with the entry's ID as part of the file name (and store those IDs in an array). Upload the files to the new server as BINARY. Then, on the new server, use the stored array along with PHP's file functions to open each file, read its contents into a variable, recover the correct identifier from the file name, and write the data back into the new database with an UPDATE query. The update process would need to be done individually, i.e. one file at a time, with the variable NULLED after each successful read/update so memory usage stays flat. (Rough sketches of both steps are at the end of this post.)

Alternatively, you could use the original images and read them back into the database with a loop that iterates through the image directory. Or, as was (I think) already suggested, break your backup into smaller pieces.

Or, once again a purely personal opinion: in the amount of time it has taken to discuss this and look for a solution, the database could have been corrected to store file identifiers instead of the actual images, and the images zipped, uploaded, and unzipped to the new server. A few benefits to that are:

- You won't run into this problem again in the future.
- If the database ever crashes or gets hacked, you won't lose everything.
- Fewer problems making and restoring backups.
- A smaller database with less overhead.
- It can be indexed for more efficiency.
- Less overall processing is required.
- Editing or replacing an image doesn't require touching the database.
- Any scripts using these images need only minimal database interaction (e.g. JavaScript for a gallery, slide show, menus, etc.).

Just a few thoughts.
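
Something like this for the export step. A very rough, untested sketch, assuming mysqli, a table named images with an id column, and the image blob in a column named image_data (all placeholder names, so adjust to your actual schema):

<?php
// Connect to the OLD database (credentials are placeholders).
$db = new mysqli('localhost', 'user', 'password', 'old_database');

if (!is_dir('export')) {
    mkdir('export'); // local folder to hold one file per entry
}

$ids = array(); // remember which entries were exported

// Pull each entry's ID and image blob.
$result = $db->query('SELECT id, image_data FROM images');
while ($row = $result->fetch_assoc()) {
    // One file per entry, with the entry's ID as part of the file name.
    file_put_contents('export/' . $row['id'] . '.bin', $row['image_data']);
    $ids[] = $row['id'];
}
$result->free();

// Save the ID list so the import script can use it on the new server.
file_put_contents('export/ids.txt', implode("\n", $ids));
?>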
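
And a sketch of the read-back step on the new server, one file at a time with the variable nulled after each successful update (same placeholder names, same caveats). Iterating over the directory like this would also cover the "loop through the image directory" alternative:

<?php
// Connect to the NEW database (credentials are placeholders).
$db = new mysqli('localhost', 'user', 'password', 'new_database');

// Prepared statement so the binary data is transferred safely.
// Note: for images bigger than max_allowed_packet you would need
// mysqli's send_long_data() with the 'b' type instead.
$stmt = $db->prepare('UPDATE images SET image_data = ? WHERE id = ?');

// The file name holds the entry's identifier.
foreach (glob('export/*.bin') as $file) {
    $id   = (int) basename($file, '.bin'); // recover the ID from the name
    $data = file_get_contents($file);      // read one file into a variable

    $stmt->bind_param('si', $data, $id);
    $stmt->execute();

    $data = null; // NULL the variable after each successful read/update
}

$stmt->close();
?>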