Re: Any way to detect performance in a test case?




On Thu, Jan 17, 2019 at 09:30:19AM +0800, Qu Wenruo wrote:
> On 2019/1/17 8:16 AM, Dave Chinner wrote:
> > On Wed, Jan 16, 2019 at 12:47:21PM +0800, Qu Wenruo wrote:
> >> E.g. one operation should finish in 30s, but when it takes over 300s,
> >> it's definitely a big regression.
> >>
> >> But considering how many different hardware/VM the test may be run on,
> >> I'm not really confident if this is possible.
> > 
> > You can really only determine performance regressions by comparing
> > test runtime on kernels with the same features set run on the same
> > hardware. Hence you'll need to keep archives from all your test
> > machines and configs and only compare between matching
> > configurations.
> 
> Thanks, this matches my current understanding of how the testsuite works.
> 
> It looks like such regression detection can only be implemented outside
> of fstests.

That's pretty much by design. Analysis of multiple test run results
and post-processing them is really not something that the test
harness does. The test harness really just runs the tests and
records the results....
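
Such post-processing of archived results could be done with a small
external script. A minimal sketch, assuming the runtimes have already
been parsed into per-test dictionaries (fstests records per-test
durations in `check.time`, but the parsing and archive layout here are
assumptions, not part of the harness):

```python
# Hypothetical external regression check: compare per-test runtimes from
# two archived runs on the same machine with the same config, and flag
# tests whose runtime grew by more than `threshold` times the baseline.

def find_regressions(baseline, current, threshold=2.0):
    """Map test name -> (baseline_secs, current_secs) for slowdowns
    exceeding `threshold`x. Tests missing from `current` are skipped."""
    regressions = {}
    for test, base_secs in baseline.items():
        cur_secs = current.get(test)
        if cur_secs is None:
            continue  # test not run in the current archive
        if base_secs > 0 and cur_secs / base_secs > threshold:
            regressions[test] = (base_secs, cur_secs)
    return regressions

# Example: a test that went from 30s to 300s is flagged; small jitter is not.
baseline = {"generic/001": 30, "generic/002": 12}
current = {"generic/001": 300, "generic/002": 13}
print(find_regressions(baseline, current))
```

The threshold is deliberately coarse; comparing only runs from matching
hardware and configurations, as suggested above, is what makes even a
simple ratio check meaningful.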

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx


