Jonathan A. Zdziarski writes:

> I was scp'ing a 2MB file to my home computer over a DSL line and just
> happened to run top at the same time.  I immediately noticed this line:
>
>   13864 root     1  30    0 2884K 1744K run    0:38 42.00% sshd2
>
> It appears that scp'ing a file over a slow connection causes the process to
> suck up a huge number of resources.  There's most likely no usleep() where
> it's needed.  A couple scp's over slow connections could severely degrade
> the box's performance.

I've noticed this but always assumed it was just that encryption is a
cpu-intensive operation, and that was how much CPU it took to encrypt
the data at the rate it was being sent.  Are you sure it's using more
than it really needs?

Unless sshd was written by a completely insane person, it will encrypt
what data it has, then sleep in `read' or `select' waiting for more.
(See the sketch at the end of this message for the kind of loop I
mean.)  In that case it will use only as much cpu as it takes to keep
up with the data.  This seems perfectly reasonable to me.  Encryption
is a cpu-intensive operation, and you have to pay the cycles somehow;
might as well spend them when you need them.  `usleep' wouldn't be the
fix, either; that would just slow down the copying.

I'm not sure how this is a security issue, anyway; if someone can scp
files, then they can log in via ssh.  So they could just run several
dozen processes doing `for(;;);'.  That will likely degrade
performance a lot more.

> This test was performed on a Solaris 8_x86 machine.

How fast a machine?  38 seconds of cpu to encrypt 2 megs of data does
seem a bit high for modern machines.  But on an older machine, I'd
believe it.

--
Nate Eldredge
neldredge@hmc.edu
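
P.S.  Here's a minimal sketch of the kind of loop I'd expect sshd to
use.  This is hypothetical, not sshd's actual code; encrypt_block() is
just a stand-in for the real cipher.

/* Hypothetical sketch of an encrypt-on-demand loop (not sshd's
 * actual code): block in select() until data arrives, read whatever
 * is available, burn CPU encrypting it, then block again. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/select.h>

static void encrypt_block(char *buf, ssize_t n)
{
    /* Stand-in for a CPU-intensive cipher such as 3DES. */
    for (ssize_t i = 0; i < n; i++)
        buf[i] ^= 0x5a;
}

int main(void)
{
    char buf[4096];
    int fd = STDIN_FILENO;  /* plaintext source, e.g. the scp child */
    fd_set rfds;

    for (;;) {
        FD_ZERO(&rfds);
        FD_SET(fd, &rfds);
        /* Sleep here until data is ready -- no usleep() needed. */
        if (select(fd + 1, &rfds, NULL, NULL, NULL) < 0) {
            perror("select");
            exit(1);
        }
        ssize_t n = read(fd, buf, sizeof buf);
        if (n <= 0)             /* EOF or error: we're done */
            break;
        encrypt_block(buf, n);  /* cycles spent only on data in hand */
        if (write(STDOUT_FILENO, buf, n) != n) {
            perror("write");
            exit(1);
        }
    }
    return 0;
}

The point being: the process only burns cycles when read() actually
returns data; between reads it sits blocked in select(), costing
nothing.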