On 7/9/12, Les Mikesell <lesmikesell@xxxxxxxxx> wrote:
> One thing that helps is to break it up into separate runs, at least
> per-filesystem and perhaps some of the larger subdirectories.
> Depending on the circumstances, you might be able to do an initial run
> ahead of time when speed doesn't matter so much, then just before the
> cutover shut down the services that will be changing files and
> databases and do a final rsync which will go much faster.

I did try this, but the time taken was pretty similar; the main delay is the part where rsync goes through all the files and spends a few hours figuring out what needs to be updated on the second run, after I shut down the services. In hindsight, I might have been able to speed things up considerably if I had generated a file list based on last-modified time and passed it to rsync via the exclude/include parameters.

> Also, have you looked at clonezilla and ReaR?

Yes, but due to time constraints, I figured it was safer to go with something simpler that I didn't have to learn as I went, and that could be done live without needing extra hardware on site. Plus, it's something that works at any site without extra software.

_______________________________________________
CentOS mailing list
CentOS@xxxxxxxxxx
http://lists.centos.org/mailman/listinfo/centos
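P.S. For anyone curious, the file-list idea could be sketched roughly like this. All paths and timestamps below are made-up examples, and it uses rsync's --files-from rather than include/exclude, since that also skips the full tree walk:

```shell
# Hypothetical sketch: keep a timestamp marker from the previous pass,
# use find(1) to list only files modified since then, and hand that
# list to rsync via --files-from so it doesn't rescan the whole tree.
src=$(mktemp -d); dst=$(mktemp -d)

echo old > "$src/old.txt"
touch -d '1 hour ago' "$src/old.txt"          # pretend this was synced in the first pass

marker=$(mktemp)
touch -d '30 minutes ago' "$marker"           # time of the first pass

echo new > "$src/new.txt"                     # changed after the marker

# Build the changed-file list relative to the source root.
( cd "$src" && find . -type f -newer "$marker" ) > /tmp/changed.txt

# Only the listed files are examined and copied.
rsync -a --files-from=/tmp/changed.txt "$src/" "$dst/"
```

In the real case the destination would be the remote host instead of a local directory, and the marker would be touched right before each rsync pass.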