On Sat, Aug 20, 2011 at 07:36, john k <johnxsec@xxxxxxxxx> wrote:
> What is the optimal way to measure the performance hit a change in the
> kernel brings to the system under normal conditions?
> Are there tools for this job or a standard procedure kernel developers
> follow or everyone comes up with his own metrics?
> I know it sounds vague and maybe the measurement must be related with where
> the change is on the kernel code.
> Say for example that I implemented some extra security checks in the
> copy_from_user function, or some other critical code. Whats the best way to
> measure the performance hit it causes?

A "daily workload" such as a kernel compile is one such test, so IMO you
have already done one good test. However, since you are talking about a
"micro" change, the effect might well be negligible in that kind of
workload.

AFAIK, the Linux Test Project (http://ltp.sourceforge.net/) is an
(almost?) standard way to benchmark Linux, be it kernel or user space.
Most Linux benchmarks are really suites composed of integrated tools
such as IOzone, VolanoMark, interbench and so on.

You might also want something like the Linux Trace Toolkit
(http://lttng.org/), which gives you very detailed code coverage and
code tracing, somewhat like gcov and gprof but with finer-grained
precision.

-- 
regards,

Mulyadi Santosa
Freelance Linux trainer and consultant

blog: the-hydra.blogspot.com
training: mulyaditraining.blogspot.com
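
P.S. For a micro change such as an extra check in copy_from_user, a small
syscall-heavy userspace microbenchmark run back to back on the patched and
unpatched kernels can show the difference more directly than a full
workload. Below is only a rough sketch of that idea (my own illustration,
not part of LTP or any of the tools above); the file name, buffer size and
iteration count are arbitrary, and it assumes small write()s on a regular
file go through the user-copy path you modified:

/*
 * Rough microbenchmark sketch: time many small write() calls, which
 * exercise the user-to-kernel copy path.
 * Build:  gcc -O2 -o ubench ubench.c   (add -lrt on older glibc)
 * Run several times on each kernel and compare the per-write cost.
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

#define ITERATIONS 1000000L   /* arbitrary */
#define BUF_SIZE   64         /* arbitrary */

int main(void)
{
	char buf[BUF_SIZE];
	struct timespec start, end;
	long i;
	int fd;

	memset(buf, 'x', sizeof(buf));

	/* any writable file works; tmpfs keeps the disk out of the picture */
	fd = open("/tmp/ubench.dat", O_WRONLY | O_CREAT | O_TRUNC, 0600);
	if (fd < 0) {
		perror("open");
		return EXIT_FAILURE;
	}

	clock_gettime(CLOCK_MONOTONIC, &start);
	for (i = 0; i < ITERATIONS; i++) {
		if (write(fd, buf, sizeof(buf)) != sizeof(buf)) {
			perror("write");
			return EXIT_FAILURE;
		}
		/* rewind so the file does not grow without bound */
		if (lseek(fd, 0, SEEK_SET) < 0) {
			perror("lseek");
			return EXIT_FAILURE;
		}
	}
	clock_gettime(CLOCK_MONOTONIC, &end);

	double ns = (end.tv_sec - start.tv_sec) * 1e9 +
		    (end.tv_nsec - start.tv_nsec);
	printf("%ld writes of %d bytes: %.1f ns per write\n",
	       ITERATIONS, BUF_SIZE, ns / ITERATIONS);

	close(fd);
	return 0;
}

Running it under something like "perf stat" on both kernels also gives you
cycle and instruction counts, which are usually steadier than wall-clock
time for a change this small.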