Re: Failed Transfer of Large Files




On 11/21/10 9:02 AM, Michael D. Berger wrote:
> On Sun, 21 Nov 2010 06:47:04 -0500, Nico Kadel-Garcia wrote:
>
>> On Sat, Nov 20, 2010 at 10:28 PM, Michael D. Berger
>> <m_d_berger_1900@xxxxxxxxx>  wrote:
> [...]
>>
>> From decades of experience in many environments, I can tell you that
>> reliable transfer of large files with protocols that require
>> uninterrupted transfer is awkward. The larger the file, the larger the
>> chance that any interruption at any point between the repository and the
>> client will break things, and with a lot of ISPs over-subscribing their
>> available bandwidth, such large transfers are, by their nature,
>> unreliable.
>>
>> Consider fragmenting the large file: BitTorrent transfers do this
>> automatically, the old "shar" and "split" tools also work well, and
>> tools like "rsync" and the lftp "mirror" command are very good at
>> mirroring directories of such split-up contents quite efficiently.
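
For what it's worth, that split-and-mirror approach might look roughly
like this (an untested sketch; the 100M chunk size, host, and paths are
just placeholder choices):

  # Split the file into pieces and record a checksum for each piece:
  mkdir pieces
  split -b 100M bigfile.iso pieces/bigfile.part.
  ( cd pieces && md5sum bigfile.part.* > bigfile.md5 )

  # Mirror the directory of pieces; on a retry, rsync only re-sends
  # pieces that are missing or incomplete:
  rsync -av --partial pieces/ user@host:/srv/pieces/

  # On the receiving end, verify the pieces and reassemble:
  cd /srv/pieces && md5sum -c bigfile.md5 && cat bigfile.part.* > bigfile.iso

A failed connection then costs you at most one chunk of re-transfer,
which is the point of splitting in the first place.
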
>
> What, then, is the largest file size that you would consider
> appropriate?

There's no particular limit with rsync: if you use the -P option, it can 
restart a failed transfer, spending only a little extra time verifying the 
already-transferred data with a block-checksum comparison.  With methods that 
can't restart, an appropriate size depends on the reliability and speed of the 
connection, since that determines the odds of a connection problem during the 
time it takes to complete the transfer.
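
Concretely, something like this (the host and paths are placeholders):

  # -P is shorthand for --partial --progress: keep the partial file if
  # the connection drops, and show progress while transferring.
  rsync -avP bigfile.iso user@host:/srv/incoming/

  # If it fails partway through, just run the same command again; rsync
  # checksums the partial copy on the far side and sends only the rest.
  rsync -avP bigfile.iso user@host:/srv/incoming/

The verification pass costs some disk I/O and CPU on each end, but almost
no bandwidth.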

-- 
   Les Mikesell
    lesmikesell@xxxxxxxxx

