Re: Any way to detect performance in a test case?




On Wed, Jan 16, 2019 at 12:47:21PM +0800, Qu Wenruo wrote:
> 
> 
> > On 2019/1/16 11:57 AM, Dave Chinner wrote:
> > On Wed, Jan 16, 2019 at 09:59:40AM +0800, Qu Wenruo wrote:
> >> Hi,
> >>
> >> Is there any way to detect (huge) performance regression in a test case?
> >>
> >> By huge performance regression, I mean some operation takes from less
> >> than 10s to around 400s.
> >>
> >> There is existing runtime accounting, but we can't do it inside a test
> >> case (or can we?)
> >>
> >> So is there any way to detect huge performance regression in a test case?
> > 
> > Just run your normal performance monitoring tools while the test is
> > running to see what has changed. Is it IO, memory, CPU, lock
> > contention or something else that is the problem?  pcp, strace, top,
> > iostat, perf, etc all work just fine for finding perf regressions
> > reported by test cases...
> 
> Sorry for the misunderstanding.
> 
> What I meant is whether it's possible for a test case to just fail
> when it hits a big performance regression.

Per-test runtime is part of the information reported in
$RESULT_BASE/check.time.

If you want to keep a history of runtimes for later comparison, then
you just need to archive the contents of that file along with the
test results.
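
Something along these lines would do the comparison - a rough sketch,
assuming check.time keeps its usual one "<test> <seconds>" pair per
line, that flags tests which slowed down by more than a given factor
between two archived copies:

#!/usr/bin/env python3
# Compare two archived check.time files and print tests that regressed
# badly. Assumes the usual "<test> <seconds>" one-pair-per-line format.
import sys

def load(path):
    times = {}
    with open(path) as f:
        for line in f:
            fields = line.split()
            if len(fields) == 2:
                times[fields[0]] = float(fields[1])
    return times

def main(old_path, new_path, factor=3.0):
    old, new = load(old_path), load(new_path)
    for test, runtime in sorted(new.items()):
        base = old.get(test)
        if base and runtime > base * factor:
            print(f"{test}: {base:.0f}s -> {runtime:.0f}s "
                  f"({runtime / base:.1f}x slower)")

if __name__ == "__main__":
    main(sys.argv[1], sys.argv[2])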

Or, alternatively, generate an XML test report, which includes the
individual runtime for each test:

.....
        <testcase classname="xfstests.xfs" name="generic/036" time="12">
        </testcase>
        <testcase classname="xfstests.xfs" name="generic/112" time="5">
        </testcase>
        <testcase classname="xfstests.xfs" name="generic/113" time="4">
        </testcase>
        <testcase classname="xfstests.xfs" name="generic/114" time="1">
.....

And then post-process these reports to determine runtime
differences.
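
A rough sketch of that post-processing (the report file names are just
placeholders; the per-test runtime comes from the
<testcase ... time="..."> elements shown above):

#!/usr/bin/env python3
# Post-process two JUnit-style xfstests reports (file names are
# illustrative) and print per-test runtime deltas taken from the
# <testcase ... time="..."> attributes.
import sys
import xml.etree.ElementTree as ET

def load_times(path):
    root = ET.parse(path).getroot()
    return {tc.get("name"): float(tc.get("time", "0"))
            for tc in root.iter("testcase")}

def main(old_report, new_report):
    old, new = load_times(old_report), load_times(new_report)
    for name in sorted(old.keys() & new.keys()):
        delta = new[name] - old[name]
        if abs(delta) >= 1:
            print(f"{name}: {old[name]:.0f}s -> {new[name]:.0f}s "
                  f"({delta:+.0f}s)")

if __name__ == "__main__":
    main(sys.argv[1], sys.argv[2])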

> E.g. one operation should finish in 30s, but when it takes over 300s,
> it's definitely a big regression.
> 
> But considering how many different hardware/VM setups the test may be
> run on, I'm not really confident this is possible.

You can really only determine performance regressions by comparing
test runtime on kernels with the same feature set run on the same
hardware. Hence you'll need to keep archives from all your test
machines and configs and only compare between matching
configurations.
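
One way to keep that archive organised - paths and naming here are
only an illustration - is to key it by host, config and kernel when
stashing check.time away after a run:

#!/usr/bin/env python3
# Illustration only: stash a run's check.time under a host/config/kernel
# directory so later comparisons only look at matching configurations.
import os
import platform
import shutil

def archive_runtimes(result_base, archive_root, config_name):
    dest = os.path.join(archive_root, platform.node(),
                        config_name, platform.release())
    os.makedirs(dest, exist_ok=True)
    shutil.copy(os.path.join(result_base, "check.time"),
                os.path.join(dest, "check.time"))
    return dest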

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx


