Re: [RFC] [PATCH 0/18] xfstests: move tests out of top level

Dave,

On Thu, Aug 23, 2012 at 09:42:19AM +1000, Dave Chinner wrote:
> On Wed, Aug 22, 2012 at 02:16:42PM -0500, Ben Myers wrote:
> > On Wed, Aug 22, 2012 at 08:09:26AM +1000, Dave Chinner wrote:
> > > On Tue, Aug 21, 2012 at 11:33:37AM -0500, Ben Myers wrote:
> > > > On Tue, Aug 21, 2012 at 08:43:06AM +1000, Dave Chinner wrote:
> > > > > On Mon, Aug 20, 2012 at 04:27:25PM -0500, Mark Tinguely wrote:
> > > > > > On 07/26/12 04:27, Dave Chinner wrote:
> > > > > > >Alt-Subject: Games with Sed, Grep and Awk.
> > > > > > >
> > > > > > >This series is based on top of the large filesystem test series.
> > > > > > >
> > > > > > >This moves all the tests into a ./tests subdirectory, and sorts them into
> > > > > > >classes of related tests. Those are:
> > > > > > >
> > > > > > >	tests/generic:	valid for all filesystems
> > > > > > >	tests/shared:	valid for a limited number of filesystems
> > > > > > >	tests/xfs:	xfs specific tests
> > > > > > >	tests/btrfs:	btrfs specific tests
> > > > > > >	tests/ext4:	ext4 specific tests
> > > > > > >	tests/udf:	udf specific tests
> > > > > > 
> > > > > > The SGI XFS group talked about your proposed changes to xfstests and
> > > > > > the response is very positive.
> > > > > > 
> > > > > > The couple of concerns are:
> > > > > > 
> > > > > > 1) There is a consensus in the group that the benchmark framework
> > > > > >    should remain until there is a common benchmark available.
> > > > > > 
> > > > > >    Could the benchmark infrastructure be placed into its own directory
> > > > > >    until a new common benchmark framework has been adopted?
> > > > > 
> > > > > Keeping it just complicates things. The benchmark infrastructure
> > > > > is bitrotted and was largely just a hack tacked on to the side of
> > > > > the regression test suite.
> > > > > 
> > > > > For it to be useful in an automated test environment, it would need
> > > > > to be re-implemented from scratch with reliable recording of results
> > > > > and the ability to determine if a result is unusual or not. None of
> > > > > this exists - it's just a framework to run a couple of benchmarks
> > > > > and dump some output to stdout using the xfstests machine config
> > > > > files....
> > > > > 
> > > > > I tried integrating other benchmarks into xfstests a while back
> > > > > (e.g. compilebench, fsmark, etc.) and using the results for some
> > > > > kind of meaningful performance regression test. I rapidly came to
> > > > > the conclusion that the infrastructure was not up to scratch and
> > > > > that my simple handwritten standalone test scripts to iterate
> > > > > through benchmarks and capture results were much easier to use and
> > > > > modify than jumping through the weird bench infrastructure hoops.
> > > > > 
> > > > > So, no, I don't think it's worth keeping at all.
> > > > 
> > > > You've already made it clear that you feel the current bench implementation is
> > > > not worth keeping.  Once a suitable replacement for the bench infrastructure
> > > > has been implemented, we can remove the old one.  Until then, we prefer to keep
> > > > what we have in the tree.
> > > 
> > > That's not how the process works 
> > 
> > That is exactly how the process works.  You posted an RFC, and Mark and the XFS
> > team at SGI walked through your patch set.  Mark subsequently posted the
> > commentary in reply to your RFC.  Cruft or not, the removal of a feature goes
> > through the same review process as everything else.
> 
> Sure, but you need to justify your arguments for keeping something
> with evidence and logic - handwaving about wanting something is, and
> always has been, insufficient justification. That's the part of the
> process I'm talking about - that statements of need require
> evidence, especially when you agreed to the removal at LSF in San
> Francisco a few months ago. My arguments at the time were:
> 
> 	a) nobody is actually using it,
> 	b) it has effectively been unmaintained since 2003,
> 	c) it has no regression analysis or detection capability,
> 	d) it shares *very little* of xfstests,
> 	e) it gets in the way of cleaning up xfstests,
> 	f) there are far better workload generators that are being
> 	actively maintained.
> 
> And AFAIA, nothing has changed in the past few months.

"In this case, SGI would like to keep the benchmark capability in xfstests in
order to have a better chance of catching performance regressions."  There has
been a performance regression in the past few months (and there will be more in
the future), and we have had performance regressions internally as well; this
has brought the value of having benchmarks in xfstests into sharp focus.
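
Even a crude check of a new number against recorded history would be a
start.  As a strawman (the results file format, test name and numbers
below are all made up):

    # results.log: one line per historical run, "<testname> <date> <value>"
    # Flag the new result if it strays more than 10% from the recorded mean.
    awk -v test="fsmark-create" -v new="4812" '
        $1 == test { sum += $3; n++ }
        END {
            if (!n) exit
            mean = sum / n
            if (new + 0 < 0.9 * mean || new + 0 > 1.1 * mean)
                printf "possible regression: %s %s vs mean %.1f\n", test, new, mean
        }' results.log

Anything real obviously needs per-config history and variance tracking,
but that is the shape of it.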

> > In this case, SGI would
> > like to keep the benchmark capability in xfstests in order to have a better chance
> > of catching performance regressions.
> ....
> > > i.e. are you trying to tell us that SGI uses it internally without
> > > actually saying so? If so, why not just say that straight up? If
> > > not, then why is removing an unused hack a problem?
> 
> You haven't answered the most important question I've asked (i.e.
> provided any evidence that bench is valuable as it stands).
> You've skirted around answering this question, so I'm left to read
> between the lines, and what I'm hearing is this:
> 
> 	a) We think that we might use bench in the future,
> 	b) but we don't really know if bench is useful or not,
> 	c) and we don't really know what something useful might look
> 	like, either,
> 	d) so just move it out of the way into a subdir for now
> 
> It's all just indecisive handwaving with a (IMO) silly conclusion
> because you have no clear plan for what you want to do for performance
> regression testing, except for the thought that "bench might be
> useful".
> 
> The thing is - all I've done is:
> 
> 	d) so just move it out of the way
> 
> It's in a git repository - deleted files are not lost! They are
> simply moved into revision history where they can still be accessed
> at will, i.e. the file(s), all their contents and history, are still
> preserved; you just need to run a git command to get them back.
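
True enough - for the record, something like this pulls them back out
(the path is illustrative; adjust to wherever the bench bits lived):

    # find the commit that removed the bench scripts
    git log --oneline --diff-filter=D -- bench/

    # restore them into the working tree from that commit's parent
    git checkout <commit>^ -- bench/
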
> 
> IMO, that's a far better solution than moving bitrotted code to
> another directory and just leaving it in an even more bitrotted (and
> now broken) state for people to trip over.
>
> > > As it is, I've been removing hacky functionality and then
> > > re-implementing it in a sane manner after everything has been moved
> > > about if it is still useful (e.g. expunged tests). The bench stuff
> > > is no different - I just don't see it as a useful addition to
> > > xfstests, so I haven't re-added it.
> > > 
> > > Hence, if you really want to keep the bench functionality as viable,
> > > long-term supported functionality, then send me patches that
> > > re-implement it cleanly on top of the patchset I posted. There's
> > > nothing stopping you from helping with this re-org of xfstests by
> > > contributing patches to provide the functionality you'd like to
> > > have....
> > 
> > Your point is well taken here.  SGI would like to keep the bench functionality
> > and we're prepared to spend time on keeping it viable.  Phil White has agreed
> > to take point on that effort.  The current thinking is to move the benchmark
> > code out of the way into a subdirectory first, and then set about improving (or
> > replacing) it.
> 
> OK, so moving it to revision history will be just fine until patches
> are written some time in the future to make it work again in a
> subdirectory.
> 
> But before anything major gets done with bench, there needs to be a
> coherent development plan produced.

Doesn't removing bench fall into the 'major' category?  Did you develop a
coherent development plan for how to replace it with something better?

Great, let's do that then!

> What is it that you intend to do
> with bench? You made no mention of this at LSF, so I'm completely in
> the dark here about your intentions. If you are going to develop
> bench into a performance regression suite, then we first need a
> discussion (and document) stating what it is and isn't going to
> do, how regressions are going to be detected, how the results are
> going to be stored for trend analysis and presentation, etc.
> 
> For example, the first feature on my list for any new performance
> regression test suite is PCP integration. If you can't monitor the
> system behaviour effectively during performance tests, then they are
> effectively useless for analysing the root cause of regressions. A
> lot of what I do is performance related, and modifications live and
> die by what such monitoring shows of the system behaviour.
> Therefore, any new performance regression suite needs to generate
> PCP archives of system and XFS level behaviour....

PCP is awesome.  ;)
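
Agreed on generating archives, too.  Wrapping a run in pmlogger is
roughly this (the "bench" group and paths here are hypothetical):

    # build a pmlogger config interactively, picking the metric groups
    # (filesystem, disk, XFS, ...) worth recording
    pmlogconf pmlogger.conf

    # log a sample every 10 seconds while the benchmarks run
    mkdir -p archives
    pmlogger -c pmlogger.conf -t 10sec archives/bench-$(date +%Y%m%d-%H%M) &
    logger_pid=$!

    ./check -g bench          # hypothetical benchmark group

    kill $logger_pid          # replay later with pmchart/pmval -a <archive>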

> Then, once we have an idea of what is going to be done, the white
> elephant can be addressed: is xfstests the right place for this
> functionality?

I think it is the perfect place.  xfstests already has a wide following among
Linux filesystem folks, so if we get bench cleaned up, everyone will have
access to the same suite automatically.  I'd really like the focus to stay on
improving xfstests rather than some other suite, and I'd prefer that we avoid
SGI-internal-only test suites for benchmarking and testing wherever possible.

> FWIW, this is the sort of reporting that a performance regression
> test suite should produce:
> 
> http://lists.linux.hp.com/~enw/ext4/3.2/

Yeah, that's really nice.  Do you happen to know what tool created it?

> Indeed, why start with bench when you can start with something far
> more advanced....

I understand that bench is bitrotted, but it still has some value even today.
Phil has agreed to take this on as a project, so the bitrot will be addressed.
You have good points about needing a better plan in this area, but we should
come up with that plan before taking the major step of removing benchmarking
from xfstests entirely.  That's not handwaving; it's good sense.  ;)

Let's stay focused on improving xfstests...

Regards,
Ben

_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs

