Hi Darren,

On Mon, 2009-08-03 at 16:30 -0700, Darren Hart wrote:
> The current ltp/testcases/realtime tests belong to one of func, perf, or
> stress. While strict pass/fail criteria make sense for functional tests
> (did the tasks wake up in priority order?), the others use "arbitrary"
> values, compare them against whatever is being measured (wakeup
> latency, etc.), and then determine pass/fail. Ideally the tests
> themselves would not determine the pass/fail criteria, and would instead
> simply report their measurements, since the criteria will vary in
> every use-case based on requirements, workload, hardware, etc.
>
> I'd like to propose an approach where the tests only report their
> measured values (with the exception of the func/* tests, which will
> maintain their pass/fail criteria). Users should be able to populate a
> criteria.conf file that specifies the criteria for each test. The
> results could then be parsed, compared against the criteria, and a
> pass/fail determined from there. I suspect it would be best for the .c
> tests to just report the numbers and statistics in a common format
> and rely on python parser scripts to read the config file and determine
> pass/fail from there.
>
> I'd like users' thoughts on this approach before we jump in and start
> changing things (as this is a fairly invasive change).

This is indeed a good approach. Should we also ask linux-rt-users, who might be interested in commenting on this?

Regards,
Subrata
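As a strawman for discussion, here is a minimal sketch of what such a parser script could look like. The criteria.conf layout ("test metric limit" per line), the test and metric names, and the result format are all illustrative assumptions on my part, not an agreed LTP format:

```python
# Hypothetical sketch of the proposed split: the .c tests report raw
# measurements in a common format, and a Python script compares them
# against per-user limits from criteria.conf. File layout, test names,
# and metric names below are assumptions for illustration only.

def parse_criteria(lines):
    """Parse criteria lines of the assumed form 'test metric max_value'."""
    criteria = {}
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        test, metric, limit = line.split()
        criteria[(test, metric)] = float(limit)
    return criteria

def evaluate(results, criteria):
    """Compare reported measurements against configured limits.

    results: iterable of (test, metric, measured_value) tuples, as a
    .c test might report them in a common format. Returns a dict
    mapping (test, metric) to "PASS", "FAIL", or "NO CRITERIA" when
    the user configured no limit for that measurement.
    """
    verdicts = {}
    for test, metric, value in results:
        limit = criteria.get((test, metric))
        if limit is None:
            verdicts[(test, metric)] = "NO CRITERIA"
        else:
            verdicts[(test, metric)] = "PASS" if value <= limit else "FAIL"
    return verdicts

if __name__ == "__main__":
    # Example criteria.conf contents (hypothetical tests and units).
    conf = [
        "# criteria.conf: per-test limits, units are test-specific",
        "sched_latency max_wakeup_latency_us 150",
        "periodic_load avg_jitter_us 50",
    ]
    # Example measurements as the tests might report them.
    measured = [
        ("sched_latency", "max_wakeup_latency_us", 120.0),
        ("periodic_load", "avg_jitter_us", 75.0),
    ]
    for key, verdict in evaluate(measured, parse_criteria(conf)).items():
        print(key, verdict)
```

The point of the split is that the .c tests never change when requirements change: tightening or loosening a limit, or adding one for new hardware, is purely an edit to criteria.conf.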