Re: Parallel transfers with sftp (call for testing / advice)

Did anything happen after https://daniel.haxx.se/blog/2010/12/08/making-sftp-transfers-fast/? I suspect it did, because we do now allow multiple outstanding requests as well as a configurable buffer size.
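
For what it's worth, both knobs are already exposed on the sftp command line: -B sets the buffer size and -R the number of outstanding requests. The values and the path below are only illustrative, but something along these lines helps noticeably on high-latency links:

    sftp -B 262144 -R 128 user@host:/path/to/bigfile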

Daniel explained the process SFTP uses quite clearly, so I'm not sure why re-assembly is an issue.  Each write request already specifies its offset within the file, so it seems reasonable that multiple writers could simply write to the same file at their respective offsets.  That relies on the target supporting sparse files, but supercomputers only ever run Linux ;-) which does the right thing.
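
Just to make that concrete, here's a rough sketch (mine, not from OpenSSH; the file name and chunk layout are invented): each writer calls pwrite(2) at the offset carried in its request, and the filesystem leaves any unwritten ranges sparse, so there is no separate re-assembly step at all.

#include <sys/types.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int
main(void)
{
    int fd = open("reassembled.dat", O_WRONLY | O_CREAT, 0644);

    if (fd == -1) {
        perror("open");
        return 1;
    }

    /* Pretend these chunks arrived out of order, possibly on
     * different connections; each carries its own file offset. */
    struct { off_t offset; const char *data; } chunks[] = {
        { 1048576, "chunk at 1 MiB" },
        {       0, "chunk at 0" },
        {  524288, "chunk at 512 KiB" },
    };

    for (size_t i = 0; i < sizeof(chunks) / sizeof(chunks[0]); i++)
        if (pwrite(fd, chunks[i].data, strlen(chunks[i].data),
            chunks[i].offset) == -1)
            perror("pwrite");

    close(fd);    /* result: a sparse file with three extents */
    return 0;
}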

The original patch we are discussing seemed more concerned with being able to connect to multiple IP addresses than with multiple connections between the same pair of machines.  The issue, as I understand it, is that the supercomputer has slow NICs, so adding more NICs provides greater aggregate network bandwidth.  That, I think, is the problem to be solved: not re-assembly, just sending to what appear to be multiple different hosts (i.e. IP addresses).
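
If that's the goal, the sending side would look roughly like this (again my own sketch, not the patch; the addresses, port and framing are placeholders): one TCP connection per destination address, with chunks dealt out round-robin so each NIC carries part of the stream.

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int
main(void)
{
    const char *addrs[] = { "10.0.0.1", "10.0.1.1" };   /* one per NIC */
    int socks[2];

    for (int i = 0; i < 2; i++) {
        struct sockaddr_in sa;

        memset(&sa, 0, sizeof(sa));
        sa.sin_family = AF_INET;
        sa.sin_port = htons(2222);                       /* placeholder */
        inet_pton(AF_INET, addrs[i], &sa.sin_addr);

        socks[i] = socket(AF_INET, SOCK_STREAM, 0);
        if (socks[i] == -1 ||
            connect(socks[i], (struct sockaddr *)&sa, sizeof(sa)) == -1) {
            perror(addrs[i]);
            return 1;
        }
    }

    /* Deal chunks out round-robin; the receiver writes each one at
     * its stated offset, as in the previous sketch. */
    for (int chunk = 0; chunk < 8; chunk++) {
        char buf[64];
        int n = snprintf(buf, sizeof(buf), "offset=%d len=32768\n",
            chunk * 32768);

        (void)write(socks[chunk % 2], buf, n);
    }

    close(socks[0]);
    close(socks[1]);
    return 0;
}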

I was curious why a supercomputer would have trouble receiving at high bandwidth over a single NIC while the sending machine has no such problem, but that's an aside.

_______________________________________________
openssh-unix-dev mailing list
openssh-unix-dev@xxxxxxxxxxx
https://lists.mindrot.org/mailman/listinfo/openssh-unix-dev



