is the change in meaning of cpu related metrics taken into account in other kernel tools

Hullo

Many computer measurements assume a fixed-performance CPU (e.g. top, ps, sar). As far as I can see, not only are the domain models of these tools not being updated, but I cannot even find out what their existing measurements mean in the face of a varying cpufreq.

Can anyone point me at anything that will help me understand how variable cpufreq impacts the metrics, other than reverse engineering the source code and experimenting with the binaries?
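For what it's worth, the kernel does expose a per-CPU instantaneous frequency through sysfs, which is one starting point for correlating samples with frequency. A minimal sketch (the sysfs paths are the standard cpufreq ones, but whether the file exists at all, and whether it reflects a measured or merely a requested frequency, depends on the driver and platform - many VMs and containers expose nothing):

```python
import glob

def read_cpu_freqs():
    """Return {cpu_name: kHz} for every CPU exposing cpufreq via sysfs.

    On hosts without cpufreq support (common in VMs and containers)
    the glob matches nothing and an empty dict is returned.
    """
    freqs = {}
    for path in glob.glob(
            "/sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_cur_freq"):
        cpu = path.split("/")[5]           # e.g. "cpu0"
        with open(path) as f:
            freqs[cpu] = int(f.read())     # value is in kHz
    return freqs

print(read_cpu_freqs())
```

Note that this is a point sample; to attribute a frequency to a sar-style interval you would have to poll it yourself and average, since (as far as I know) sar does not record it per sample period.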

A simple example: if I'm trying to work out how much spare capacity I've got on a compute server, I need to know how much I'm using now. If I start with, say, top, I can see that process x is consuming 25% of the CPU and that the server spends 30% of its time in a wait state. What's not obvious is what these are percentages of. It's not even clear that the per-process values are consistent with the totals. To make this robust, I'll want something that collects continuous samples, but I'm not sure that sar records the relevant cpufreq for each sample period (nor whether such a frequency can be measured or estimated at all).
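To make the capacity question concrete: top's percentages are fractions of elapsed time, not of peak capacity, so 25% busy at half the maximum frequency is really about 12.5% of what the CPU could deliver. A sketch of the correction I have in mind - the function name and arguments are mine, and it assumes capacity scales linearly with frequency, which ignores memory-bound work, turbo and SMT effects:

```python
def effective_utilisation(busy_ticks, total_ticks, freq_khz, max_freq_khz):
    """Estimate utilisation as a fraction of the CPU's *maximum* capacity.

    busy_ticks / total_ticks are deltas between two /proc/stat samples;
    freq_khz is the average frequency over the same interval. A plain
    time-based tool reports busy/total regardless of frequency.
    """
    time_util = busy_ticks / total_ticks
    return time_util * (freq_khz / max_freq_khz)

# 25% time-busy while running at 1.2 GHz on a 2.4 GHz part:
print(effective_utilisation(25, 100, 1_200_000, 2_400_000))  # 0.125
```

Whether a linear frequency scaling factor is even defensible is exactly the sort of thing I'd like a reference for.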

Virtualisation complicates the picture further, as the effective cpufreq seen by a guest is better treated as a continuous attribute rather than a discrete one.

Any thoughts on where I can get some baseline information on how these different perspectives hang together?

tia
Tim Coote



--
To unsubscribe from this list: send the line "unsubscribe cpufreq" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
