On Mon, Feb 7, 2011 at 09:01, Stan Hoeppner <stan@xxxxxxxxxxxxxxxxx> wrote:
> Justin Piszcz put forth on 2/6/2011 4:16 AM:
>
>> Workflow process-
>>
>> Migrate data from old/legacy RAID sets to new ones, possibly also 2TB->3TB, so
>> the faster the transfer speed, the better.
>
> This type of data migration is probably going to include many many files of
> various sizes from small to large. You have optimized your system performance
> only for individual large file xfers. Thus, when you go to copy directories
> containing hundreds or thousands of files of various sizes, you will likely see
> much lower throughput using a single copy stream. Thus if you want to keep that
> 10 GbE pipe full, you'll likely need to run multiple copies in parallel, one per
> large parent directory. Or, run a single copy from say, 10 legacy systems to
> one new system simultaneously, etc.
>
> Given this situation, you may want to consider tar'ing up entire directories
> with gz or bz compression, if you have enough free space on the legacy machines,
> and copying the tarballs to the new system. This will maximize your throughput,
> although I don't know if it will decrease your total work flow completion time,
> which should really be your overall goal.

Another option might be to use tar and gzip to bundle the data up, then
pipe it through netcat or ssh. When I have to transfer large chunks of
data, I find this is the fastest method, since nothing is written to an
intermediate tarball. That said, if the connection is interrupted,
you're on your own; rsync might also be a good option, as it can resume
a partial transfer.

Thanks,

--
Julian Calaby

Email: julian.calaby@xxxxxxxxx
Profile: http://www.google.com/profiles/julian.calaby/
.Plan: http://sites.google.com/site/juliancalaby/
--
To unsubscribe from this list: send the line "unsubscribe linux-scsi" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
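
For reference, a minimal sketch of the tar-over-a-pipe transfer discussed
above. Host and path names are hypothetical; for illustration the pipe runs
locally, with the remote variants shown in comments.

```shell
# Sketch of streaming a directory tree through tar+gzip in one pass,
# with no intermediate tarball. Over the network, the right-hand side
# of the pipe would instead be (hypothetical host/path names):
#
#   ... | ssh newhost 'tar xzf - -C /mnt/newraid'      # encrypted
#
# or with netcat (start the listener on the receiving host first):
#
#   newhost$ nc -l 9999 | tar xzf - -C /mnt/newraid
#   oldhost$ tar czf - -C /data raidset1 | nc newhost 9999
#
# rsync, by contrast, can resume after an interruption:
#
#   rsync -a --partial /data/raidset1/ newhost:/mnt/newraid/

src=$(mktemp -d); dst=$(mktemp -d)
echo "sample" > "$src/file.txt"

# Pack, compress, stream, and unpack in a single pipeline:
tar czf - -C "$src" . | tar xzf - -C "$dst"

cat "$dst/file.txt"
rm -rf "$src" "$dst"
```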