On Mon, Sep 3, 2018 at 6:18 PM, Cameron Simpson <cs@xxxxxxxxxx> wrote:
> On 03Sep2018 13:07, Jakub Jelen <jjelen@xxxxxxxxxx> wrote:
>> SCP protocol is really slow, especially on networks with high latency
>> (wireless). The reason why is mostly the size of the buffers, which is
>> very small, and SCP waits for every part to be confirmed by the remote
>> host before sending another part.
>
> This is categorically false. I've just read through the code to confirm.
>
> The only round-trip stuff in scp is at the start and end of the file
> transfer, when it sends the starting file permissions and checks
> receipt, and at the very end. During the transfer it just chucks data at
> the TCP connection as fast as it will accept it. Its internal buffer
> isn't particularly large, but that is irrelevant because (a) the OS
> reads from files in a sensible fashion and (b) TCP coalesces writes into
> the same packet if they arrive fast enough and there's room.
>
> So if you're transferring a lot of quite small files, the start/end file
> transaction can get in the way. But large files go through pretty much
> at the network speed (or the disc speed, if the discs are slow and the
> network is fast).
>
> In years of using scp, it has always been pretty fast, and rsync notably
> slower for complete-file copies.
>
> I think Jakub has been misreading the code, probably the atomicio()
> function, which _doesn't_ do an end-to-end delivery of the current
> buffer; it is just a wrapper around the OS read/write call, which may
> return a short result if its underlying buffer empties/fills. That is a
> _local_ buffer, such as the TCP send buffer.
>
> Also, calling a home LAN wireless connection high latency is a bit
> special-purpose. It may be higher latency than your wired Ethernet, but
> it is still pretty low.
>
> By comparison, I just copied a decent-sized file, using scp, over a
> satellite link. Round-trip packet time of 600ms-700ms best case.
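[To illustrate the atomicio() point above: it is just a retry loop around a
local read/write call. A minimal Python sketch of that behaviour follows —
hypothetical code for illustration, not OpenSSH's actual implementation.]

```python
import os

def write_all(fd, data):
    """Keep calling os.write() until every byte has been handed to the
    kernel -- a local retry loop, analogous in spirit to atomicio().

    A short write here only means the local buffer (e.g. the TCP send
    buffer) was momentarily full; nothing in this loop waits for the
    remote end to acknowledge anything, so no network round trip is
    added per buffer.
    """
    total = 0
    view = memoryview(data)
    while total < len(data):
        written = os.write(fd, view[total:])
        total += written
    return total
```

[The retry is entirely local: once the data is accepted by the kernel, TCP
streams it out as fast as the window allows.]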
> Throughput was consistent with my ISP speeds: 5Mbps up, 25Mbps down.
>
> The network packet latency is _not_ a particular issue with scp, because
> it doesn't do the per-buffer end-to-end checks Jakub imagines.
>
>> You can google "scp speed" and you will get a lot of answers, sometimes
>> wrongly accusing the encryption or the compression, but really, the RTT
>> and buffers are at fault, as I write here:
>>
>> https://superuser.com/a/1101203/466930
>
> I read that. It's about a paragraph of text with no discussion.
>
>> SCP should really be used only as a fast hack for copying files on fast
>> local networks. For all other cases, use SFTP, or rsync if you need
>> something more complex.
>
> Really, no. Use whatever works best. Scp is fine for large files. For
> incremental changes, use rsync (which does a lot of checksum passing to
> skip identical data areas), and for lots of small files use tar or cpio
> piped over ssh, which removes another layer of round-trip latency (the
> per-file sync).
>
> Cheers,
> Cameron Simpson <cs@xxxxxxxxxx>
> _______________________________________________
> users mailing list -- users@xxxxxxxxxxxxxxxxxxxxxxx
> To unsubscribe send an email to users-leave@xxxxxxxxxxxxxxxxxxxxxxx
> Fedora Code of Conduct: https://getfedora.org/code-of-conduct.html
> List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
> List Archives:
> https://lists.fedoraproject.org/archives/list/users@xxxxxxxxxxxxxxxxxxxxxxx

Hey Cameron,

Sounds like you and Jakub have differing analyses of the issue. Would you
consider posting what you've written to Superuser as well, to give others
a shot at seeing what you've stated? Someone looking to google the issue
might not see what you posted in this thread.

thanks!
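[For reference, the tar-piped-over-ssh pattern mentioned above is, at the
shell, roughly `tar cf - somedir | ssh host 'tar xf - -C /dest'`, where
the host and paths are placeholders. A self-contained Python sketch of
the same pipeline, with an option to run the receiving tar locally so the
plumbing can be tried without a remote host:]

```python
import subprocess

def tar_pipe(src_dir, dest_dir, ssh_host=None):
    """Stream a tar archive of src_dir straight into an untar process.

    If ssh_host is given (a hypothetical placeholder hostname), the
    receiving tar runs remotely via ssh -- one ssh session for the whole
    tree, so there is no per-file start/end handshake. Otherwise the
    receiving tar runs locally, which is handy for testing the pipeline.
    """
    pack = subprocess.Popen(
        ["tar", "cf", "-", "-C", src_dir, "."],
        stdout=subprocess.PIPE)
    if ssh_host:
        unpack_cmd = ["ssh", ssh_host, "tar xf - -C " + dest_dir]
    else:
        unpack_cmd = ["tar", "xf", "-", "-C", dest_dir]
    unpack = subprocess.Popen(unpack_cmd, stdin=pack.stdout)
    pack.stdout.close()  # let unpack see EOF when pack finishes
    unpack.communicate()
    return pack.wait() == 0 and unpack.returncode == 0
```

[The whole tree travels as one continuous stream, which is why this beats
per-file scp for many small files.]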