On Thu, Dec 3, 2015 at 2:10 AM, Nicholas Mc Guire <der.herr@xxxxxxx> wrote:
> On Wed, Dec 02, 2015 at 04:36:50PM -0800, Greg KH wrote:
>> On Wed, Dec 02, 2015 at 05:50:30PM -0600, Victor Rodriguez wrote:
>> > On Tue, Dec 1, 2015 at 7:32 PM, Greg KH <gregkh@xxxxxxxxxxxxxxxxxxx> wrote:
>> > > On Tue, Dec 01, 2015 at 06:45:51PM -0600, Victor Rodriguez wrote:
>> > >> Hi
>> > >>
>> > >> Despite the fact that this is not a well-formulated question, I wonder
>> > >> what tests could be a good subset to measure the performance of the
>> > >> kernel. I have some approaches like Phoronix does here:
>> > >>
>> > >> http://www.phoronix.com/scan.php?page=article&item=linux-41-byt&num=1
>> > >>
>> > >> I am sure postmark / John the Ripper / Apache are good candidates, but I
>> > >> want to ask the community if there is some specific test that you
>> > >> recommend.
>> > >
>> > > It depends on what you want to test, specifically. The "kernel" isn't a
>> > > very specific thing; what most of those tests measure is the speed of the
>> > > hardware, not specifically the kernel itself.
>> > >
>> > > good luck,
>> > >
>> > > greg k-h
>> >
>> > Thanks for the feedback. You are right that they test the speed of the HW;
>> > however, I have seen that when there is a change in the kernel's networking
>> > code, the performance of Apache changes, which makes total sense.
>>
>> Maybe, maybe not, depending on whether "apache" is CPU or hardware bound
>> (networking hardware has physical limits...). Again, you have to be very
>> sure about exactly what you want to test before using such a test to try
>> to "validate" anything other than just raw hardware speed.
>>
>> Take a look at the "old" lmbench set of benchmarks for valid things that
>> a kernel change can affect; it's much different from what you might be
>> thinking of as a test.
>>
> We also still use lmbench as the usual first level of assessment, as
> it gives a lot of information about the change set's impact on low-level
> functions (system calls, IPC, allocation...). It is much more precise
> than trying to detect changes in complex applications that might only be
> making a handful of the affected system calls and thus look like
> performance did not change while it actually did - it's just in some
> hard-to-reach corner case.
>
> As with all testing, you need layers of testing to get a usable
> picture of what is going on, and lmbench is a good candidate for the
> lowest level. Deducing system-level changes from looking at complex
> application performance changes is almost impossible.
>
> Specifically, lmbench has a simple "make results; make rerun" which can
> give a good overview of differences - but the tests' default runs are
> actually only a small part of what the tests can uncover, so looking at
> individual microbenchmarks to discover latency/bandwidth changes can be
> very helpful, also to uncover odd hardware behavior.
>
> Some other low-level benchmarks we use are:
> rt-tests - scheduling, pi
> NetPIPE - network bandwidth
> bonnie++ - filesystem

Thanks a lot hofrat, I really appreciate all the help. I think it is time to
turn my eyes to lmbench for sure, as well as to the tools you mention :)

Yes, a lot of layers are necessary to measure the QA of an OS; we need full
image tests as well as cloud tests (since our OS is designed for the cloud).
lmbench will be amazing for the low level - a rough sketch of the kind of
syscall-latency loop I have in mind is at the end of this mail.

> thx!
> hofrat
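
P.S. For reference, here is a minimal sketch (my own, not taken from lmbench)
of the kind of null-syscall latency loop that lmbench's lat_syscall times; it
assumes a Linux box where syscall(2) and clock_gettime() are available:

#define _GNU_SOURCE
#include <stdio.h>
#include <time.h>
#include <unistd.h>
#include <sys/syscall.h>

#define ITERATIONS 1000000L

int main(void)
{
	struct timespec start, end;
	long i;

	clock_gettime(CLOCK_MONOTONIC, &start);
	for (i = 0; i < ITERATIONS; i++)
		syscall(SYS_getpid);	/* force a real syscall, bypass any cached/vDSO path */
	clock_gettime(CLOCK_MONOTONIC, &end);

	/* average cost of one round trip into the kernel, in nanoseconds */
	double ns = (end.tv_sec - start.tv_sec) * 1e9 +
		    (end.tv_nsec - start.tv_nsec);
	printf("getpid: %.1f ns per call\n", ns / ITERATIONS);
	return 0;
}

Compiled with gcc -O2 and run on two kernel versions, a regression of even a
few tens of nanoseconds per call shows up clearly here, while it would stay
buried inside an Apache or postmark run.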