On 2014/10/19 08:01, Keith Keller wrote:
On 2014-10-19, Tim Dunphy <bluethundr@xxxxxxxxx> wrote:
... and remember to use tcp for nfs transfer ;)
Hmm you mean specify tcp for rsync? I thought that's default.
No, he means use TCP for NFS (which is also the default).
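For reference, you can verify which transport an existing NFS mount is using, or force TCP explicitly at mount time. A quick sketch (server:/export and /mnt/export are placeholders for your own server and mount point):

```shell
# Show mount options for current NFS mounts -- look for proto=tcp
nfsstat -m

# Mount with TCP forced explicitly (it is the default on modern Linux,
# but being explicit does no harm)
mount -t nfs -o proto=tcp server:/export /mnt/export
```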
I suspect that sshfs's relatively poor performance is having an impact
on your transfer. I have a 30TB filesystem which I rsync over an
OpenVPN link, and building the file list doesn't take that long (maybe
an hour?). (The links themselves are reasonably fast; if yours are not,
that would have a negative impact too.)
If you have the space on the jump host, it may end up being faster to
rsync over ssh (not using NFS or sshfs) from node 1 to the jump host,
then from the jump host to node 2.
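A minimal sketch of that two-stage copy. The hostnames (node1, node2) and staging path (/scratch/data) are placeholders, and the -aH/--partial flags are just one reasonable choice, not Keith's exact invocation. Setting DRY_RUN=1 prints the commands instead of running them, which is handy for checking the paths first:

```shell
#!/bin/sh
# do_rsync SRC DST -- run (or, with DRY_RUN set, just print) one rsync.
# -aH preserves permissions, times, and hard links; --partial lets an
# interrupted transfer resume instead of starting the file over.
do_rsync() {
    if [ -n "$DRY_RUN" ]; then
        echo rsync -aH --partial "$1" "$2"
    else
        rsync -aH --partial "$1" "$2"
    fi
}

# two_hop_sync SRC STAGE DST
# Stage 1: node 1 -> jump host's local disk; stage 2: jump host -> node 2.
two_hop_sync() {
    do_rsync "$1/" "$2/" && do_rsync "$2/" "$3/"
}

# Example (all names are placeholders):
# two_hop_sync node1:/data /scratch/data node2:/data
```

The staging copy obviously needs as much free disk on the jump host as the data set itself, which is why Keith qualifies it with "if you have the space".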
--keith
Another option that might help is to break the transfer up into smaller
pieces. We have a 3TB filesystem with a lot of small data files in some
of the subdirectories, and building the file list used to take a long
time (close to an hour) and hurt filesystem performance. But since the
volume's mount point has only directories beneath it, we were able to
tweak our rsync script to iterate over those subdirectories as
individual rsyncs. Not only does that isolate the directories with large
numbers of files to their own rsync instances, but as an added bonus, if
a given rsync attempt fails for some reason, the script picks up at the
same subdirectory and retries (a couple of times) rather than having to
restart the rsync of the entire filesystem.
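A rough sketch of that approach in POSIX shell, assuming as above that the mount point contains only directories. The paths, target host, retry count, and rsync flags are all placeholders, not Miranda's actual script:

```shell
#!/bin/sh
# Per-subdirectory rsync with retries. /data and backup:/data below are
# hypothetical source and destination; adjust for your environment.
MAX_TRIES=3

# retry CMD... -- run CMD up to MAX_TRIES times; return 1 if it never
# succeeds, so a stubborn subdirectory doesn't abort the whole run.
retry() {
    tries=1
    until "$@"; do
        if [ "$tries" -ge "$MAX_TRIES" ]; then
            return 1
        fi
        tries=$((tries + 1))
    done
}

# sync_subdirs SRC DEST -- one rsync per top-level subdirectory of SRC.
sync_subdirs() {
    src=$1 dest=$2
    for dir in "$src"/*/; do
        name=$(basename "$dir")
        retry rsync -a --delete "$dir" "$dest/$name/" ||
            echo "giving up on $name after $MAX_TRIES attempts" >&2
    done
}

# Example (placeholders): sync_subdirs /data backup:/data
```

Splitting the job this way also means each rsync builds a much smaller file list, which is where the original hour-long delay came from.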
Hope this helps!
Miranda
_______________________________________________
CentOS mailing list
CentOS@xxxxxxxxxx
http://lists.centos.org/mailman/listinfo/centos