At 17:50 +0200 2/7/2006, urgrue wrote:
> rsync certainly helps with large directories, so that's at least a
> partial solution, but it doesn't resume an interrupted transfer of an
> individual file, does it?
rsync compares file size and modification time, and if they match it
presumes the contents are the same. If they differ, it reads the file,
generates a checksum for each chunk of the file, and compares the
checksums (far less network traffic than sending the data itself),
then updates only the chunks whose checksums don't match. This means
it does not resend (or rewrite) the bulk of a file that is already
consistent, but it does have to read both the source and destination
files in full. If you're rsyncing over a network, the performance is
reasonably close to resuming where the transfer left off, but it
doesn't depend on maintaining the state of an aborted copy.
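The chunk-comparison step described above can be sketched very roughly
as follows. This is an illustration only, not rsync's actual algorithm:
real rsync uses a rolling weak checksum plus a strong checksum so it can
match blocks even when data has shifted, and its block size, hash
choice, and function names here are all invented for the demo.

```python
# Illustrative sketch of block-checksum comparison (NOT real rsync:
# rsync uses a rolling weak checksum + strong checksum and handles
# shifted data; this only compares fixed-position blocks).
import hashlib

BLOCK = 4  # tiny block size for the demo; rsync uses much larger blocks


def block_sums(data: bytes) -> list:
    """Checksum each fixed-size block of the data."""
    return [hashlib.md5(data[i:i + BLOCK]).hexdigest()
            for i in range(0, len(data), BLOCK)]


def changed_blocks(src: bytes, dst: bytes) -> list:
    """Indices of blocks whose checksums differ, i.e. blocks that
    would need to be resent; only the checksums cross the network."""
    s, d = block_sums(src), block_sums(dst)
    return [i for i in range(max(len(s), len(d)))
            if i >= len(s) or i >= len(d) or s[i] != d[i]]


src = b"aaaabbbbccccdddd"
dst = b"aaaaXXXXccccdddd"   # only the second block differs
print(changed_blocks(src, dst))  # -> [1]
```

Only block 1 is flagged, so only that chunk's data would be resent;
the matching blocks cost just a checksum exchange.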
Note: Options exist to override or tune several of the details above,
but I believe what I described is essentially the normal rsync behavior.
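On the original question of resuming an interrupted transfer: rsync
does have options aimed at exactly that. A minimal invocation might
look like the following (paths are placeholders; check your rsync
version's man page for which options it supports):

```shell
# --partial keeps a partially transferred file instead of deleting it,
# so a re-run can reuse the data already sent rather than starting over.
rsync -av --partial /source/dir/ host:/dest/dir/

# --append-verify (rsync 3.x) resumes a file by appending to the
# existing partial copy, then verifies the whole file's checksum.
rsync -av --append-verify /source/dir/ host:/dest/dir/
```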
--
Jeff Woods <kazrak+kernel@xxxxxxxxxxx>