Re: rsync question: building list taking forever




On Sun, Oct 19, 2014 at 9:05 PM, Tim Dunphy <bluethundr@xxxxxxxxx> wrote:
>>
>> > Don't forget that the time taken to build the file list is a function of
>> > the number of files present, and not their size. If you have many
>> millions
>> > of small files, it will indeed take a very long time. Over sshfs with
>> > a slowish link, it could be days.
>> >
>> > ....and it may end up failing silently or noisily anyway.
>
>
> Ahhh, but isn't that part of the beauty of adventure that being a linux
> admin is all about? *twitch*

There's not that much magic involved.  The time it takes rsync to read
a directory tree to get started should approximate something like
'find /path -links +0' (i.e. something that has to read the whole
directory tree and the associated inodes).  Pre-3.0 rsync versions
read and transfer the entire file list before starting the comparison,
and can trigger swapping if you are low on RAM.
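
If you want a rough lower bound before committing to the transfer, you
could time that same traversal on the source host (the path below is
just a placeholder for your own tree):

  # approximate rsync's file-list pass: walk the tree and stat every inode
  time find /path -links +0 | wc -l

The elapsed time is close to the best case for rsync's "building file
list" phase, and the count tells you how many files it will have to
consider.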

So, you probably want the 'source' side of the transfer to be local
for faster startup. But... in what universe is NFS mounting across
data centers considered more secure than ssh?  Or even a reasonable
thing to do?  How about a VPN between the two hosts?
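
If the copy really has to cross data centers, plain rsync over ssh
(rather than pointing rsync at an sshfs or NFS mount) lets each end
walk its own filesystem locally.  Something along these lines, with
the host and paths as placeholders:

  # the remote side runs its own rsync, so both trees are scanned at local disk speed
  rsync -az -e ssh /local/path/ user@remote.example.com:/backup/path/

Only the file list and the deltas cross the wire, and ssh handles the
encryption without needing a separate VPN.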

-- 
   Les Mikesell
     lesmikesell@xxxxxxxxx
_______________________________________________
CentOS mailing list
CentOS@xxxxxxxxxx
http://lists.centos.org/mailman/listinfo/centos



