On Wed, Jan 27, 2010 at 12:05 PM, Ketil Froyn <ketil@xxxxxxxxxx> wrote:
> I've also noticed some other things:
>
> 1. ping shows ~1% packet loss (2 packets out of 120 missing).
> 2. scp transfers start at about 360 kB/s and then fall, completing a
>    file at an average of about 85 kB/s (680 kbit/s).
> 3. Running 2 scp in parallel, they also started faster and then fell,
>    ending at about 82 kB/s each (~1.3 Mbit/s total).
> 4. Running 4 scp in parallel, they got an average of about 78 kB/s
>    each, for a total of ~2.4 Mbit/s transferred.
>
> So the network evidently can carry more data than a single transfer
> achieves. I guess it should be possible to tune this somehow? Or do I
> just need to run many transfers in parallel?

I don't understand much about this stuff, but could this be relevant?
It's an excerpt from http://www.psc.edu/networking/projects/tcptune/
(and could be out of date with respect to the current versions, for all
I know):

---
For example, secure shell and secure copy (ssh and scp) implement
internal flow control using an application-level mechanism that severely
limits the amount of data in the network, greatly reducing performance
on all but the shortest paths. PSC is now supporting a patch to ssh and
scp that updates the application flow-control window from the kernel
buffer size. With this patch, the TCP tuning directions on this page can
alleviate the dominant bottlenecks in scp. In most environments scp will
run at full link rate or at the CPU limit for the chosen encryption.
---

You might try comparing scp rates with ftp or nc. UDP vs TCP would be
interesting too. Is your txqueuelen (or qdisc queue limit, if you're
using traffic control) high enough for that latency?

--
To unsubscribe from this list: send the line "unsubscribe linux-net" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
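The single-stream ceiling described above is what you'd expect when a fixed flow-control window is small relative to the bandwidth-delay product (BDP): throughput is then capped at roughly window / RTT, no matter how fast the link is. A minimal sketch of the arithmetic, with assumed placeholder figures (the 10 Mbit/s rate and 50 ms RTT are illustrative, not measurements from this thread):

```shell
#!/bin/sh
# Bandwidth-delay product: bytes that must be "in flight" to fill the pipe.
# If the sender's effective window (e.g. ssh's internal channel window) is
# smaller than this, throughput is capped at about window / RTT.
RATE_MBIT=10   # assumed link rate in Mbit/s (placeholder)
RTT_MS=50      # assumed round-trip time in ms (placeholder; see your ping output)
# BDP (bytes) = rate (bit/s) * RTT (s) / 8
BDP=$(( RATE_MBIT * 1000000 * RTT_MS / 1000 / 8 ))
echo "BDP: $BDP bytes"
```

For instance, with a 64 KiB window (a figure often cited for older ssh implementations, assumed here for illustration) and a 50 ms RTT, the ceiling is about 65536 / 0.05 ≈ 1.3 MB/s, and it drops further as RTT grows; that would also explain why several parallel transfers together outrun a single one.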
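On the txqueuelen and kernel-buffer question above, a sketch of what to inspect and adjust (the interface name eth0 and the buffer sizes are assumptions for illustration; the sysctl names are the standard Linux TCP buffer knobs):

```shell
# Show the current transmit queue length (reported as "qlen"):
ip link show eth0

# Raise it for a high-latency path if it looks small (needs root):
ip link set eth0 txqueuelen 1000

# Check the kernel's socket buffer limits; if rmem_max/wmem_max are small,
# TCP can never open a window near the bandwidth-delay product:
sysctl net.core.rmem_max net.core.wmem_max
sysctl net.ipv4.tcp_rmem net.ipv4.tcp_wmem

# Example: allow buffers up to 4 MB (illustrative values, not a recommendation):
sysctl -w net.core.rmem_max=4194304
sysctl -w net.core.wmem_max=4194304
sysctl -w net.ipv4.tcp_rmem="4096 87380 4194304"
sysctl -w net.ipv4.tcp_wmem="4096 65536 4194304"
```

Note that even with large kernel buffers, an unpatched scp can still be limited by its own application-level window, which is exactly the bottleneck the PSC patch quoted above is meant to remove.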