On Sat, Jun 07, 2003 at 02:10:32PM +0530, learner linux wrote:
> I was just wondering, how correct will this be when it comes to the
> issues of pre-emption (in user-space). All the calculations will
> result in erroneous values. What is the rationale behind such an
> implementation? Or am I missing out something?

System administrators are well used to the notion that top and ps do
not provide coherent snapshots of system state. They are used to the
idea that processing continues as normal while running top or ps, and
that the results are at best a stochastic guess at system performance
issues.

Ensuring that top or ps gives self-consistent snapshots of data is
almost certainly not worth the trouble. Doing so would require
STOPPING all other processes on the system long enough to perform the
accounting, then restarting them all. That isn't horrid on a
single-CPU system, but on SMP systems it is a rather odious burden.

Considering that some processes may no longer exist by the time the
terminal emulator has displayed the results on screen, while other
previously dormant processes may be awoken while the terminal emulator
is still drawing those results, the end result really doesn't matter
much to the sysadmin anyway.

The only downsides I've seen in real life are short-lived processes
that don't last even a second (compiling comes to mind), and fork
bombs, where the machine becomes unusable between two outputs of top
and one doesn't have much information pointing towards the reason the
machine halted... (rlimits can help with the second problem, and
process accounting can help with the first.)

Does this answer your questions?

-- 
http://sardonix.org/
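As a small illustration of the point above: even two reads of the same
/proc file, taken a few milliseconds apart, already disagree, because
the kernel does not freeze the system while you are looking. This is a
minimal sketch (assuming a Linux system with /proc mounted), not anything
top or ps actually does:

```python
import time

def read_uptime():
    # First field of /proc/uptime: seconds since boot, updated continuously.
    with open("/proc/uptime") as f:
        return float(f.read().split()[0])

a = read_uptime()
time.sleep(0.05)   # other processes keep running in this window
b = read_uptime()
print(b > a)       # the "snapshot" moved while we were sampling it
```

Anything top prints was assembled from many such reads, each taken at a
slightly different instant, which is why the columns can never be a
single coherent snapshot.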
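On the rlimits suggestion: the relevant knob against fork bombs is
RLIMIT_NPROC, which caps how many processes a user may own. A hedged
sketch of inspecting it via Python's stdlib resource module (assumes
Linux; the lowering call is shown commented out to avoid side effects):

```python
import resource

# Current per-user process limits; -1 means RLIM_INFINITY (no cap).
soft, hard = resource.getrlimit(resource.RLIMIT_NPROC)
print(soft, hard)

# A fork bomb dies once the soft limit is hit, e.g.:
# resource.setrlimit(resource.RLIMIT_NPROC, (1024, hard))
```

System-wide, the same limit is usually set in /etc/security/limits.conf
rather than per-process like this.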