[Hotplug_sig] Re: ANN: lhcs-regression-0.3 released

On Wed, 2005-07-13 at 10:26 -0700, Bryce Harrington wrote:
> On Wed, Jul 13, 2005 at 09:58:56AM -0700, Mary Edie Meredith wrote:
> > On Wed, 2005-07-13 at 09:38 -0700, Bryce Harrington wrote:
> > > On Wed, Jul 13, 2005 at 09:13:52AM -0700, Mark Delcambre wrote:
> > > > On Wednesday 13 July 2005 08:47 am, Mary Edie Meredith wrote:
> > > > > To make these test cases easy to implement and use, could you
> > > > > please make sure the scripts have comments that state the goal
> > > > > of the test case, if you haven't already?  Sorry, I should have
> > > > > thought of this when we went over the pseudocode.
> > > 
> > > Yes, we should have them print out a couple sentences explaining the
> > > purpose.  Do you think we can just take the description that you wrote
> > > on your test case summary page?
> > 
> > Yes, the purpose from the test case summary page is good.  If that
> > doesn't match somehow, we need to correct the web page.
> > 
> > If you are exporting the number of loops, then you can print
> > it once at the beginning, right?  Otherwise, that's a lot of
> > unnecessary extra output.
> 
> Yes, that's a good approach.  I think this should fit in well with how
> Mark's implemented the looping.
> 
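As a concrete example, each script's preamble could look something like
this (just a sketch; the script name, the purpose text, and the LOOPS
variable are all made up):

    #!/bin/sh
    # Purpose: repeatedly offline and online a single CPU
    # (the real text would come from the test case summary page).
    #
    # LOOPS is expected to be exported by the wrapper; default to 1.
    LOOPS=${LOOPS:-1}
    echo "hotplug01: running $LOOPS iterations"
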
> > Speaking of output, maybe we need a flag that lets you report only
> > failures?  I'm torn on this one, because you might want to see where
> > a failure fits in among the successes, and if you don't capture that
> > on a long run, you have to rerun.  Of course you can always
> > turn the flag off... This isn't a big thing for someone to add, but
> > it would be nice if it were consistent across the scripts.
>  
> > Your thoughts?
> 
> Well, remember we can always just `grep FAIL` from the output.  ;-) In
> fact, that's standard procedure for processing the output from these
> tests.  ;-) This is why we've taken care to make sure the tests print
> out PASS/FAIL in a standardized way, so we can use grep and get the info
> needed to debug it (the name of the failed test, plus the error
> message).
Sure, grep works.  I was just picturing running out of disk space on
very long runs, but it sounds like the output size is minor, so
I'm game with this approach.
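
For concreteness, the post-processing would be just something like this
(the wrapper and log file names are placeholders):

    # capture the whole run once, then pull out only the failures
    ./runall.sh > hotplug.log 2>&1
    grep FAIL hotplug.log
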
> 
> In general I feel that anything that we could do easily enough with
> standard unix tools, we should leave out of the test case scripts, and
> let them focus on just doing their specific test as well as they can.
> 
> > > I think the wrapper script should be pretty straightforward; it should
> > > just define the number of loops to run, then invoke all of the
> > > hotplug*.sh tests in the directory.  The little for loop I wrote
> > > yesterday could be used as a starting point.
> > Easy, I know, but an accompanying loop with reasonable
> > defaults lets someone run this quickly without much
> > investigation.  That's one goal.  The second is to run it with added
> > repetitions and background activity, but I doubt we can
> > package that -- document how to, certainly.
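
Something along these lines is all I'm picturing for the wrapper
(untested sketch; the script name and the default of 10 are arbitrary):

    #!/bin/sh
    # Run every hotplug test in this directory with reasonable defaults.
    # The repetition count can be overridden from the environment,
    # e.g.  LOOPS=100 ./runall.sh
    LOOPS=${LOOPS:-10}
    export LOOPS
    for t in ./hotplug*.sh; do
        sh "$t"
    done
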
> 
> Agreed, sounds good.  Have you given more thought to test case 5 and the
> workload to use for it?  Maybe next week you and Mark can put your heads
> together on it.
Not yet; it's on my TODO list.

We also need to think about the tools/utilities that should 
be checked for Test 6.  We know about top, sar, probably 
iostat (percentage-of-CPU-utilization computations would
be affected, for starters).  Suggestions welcome.
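
To make that concrete, I picture something along these lines for each
tool (untested; needs root, and the CPU number and timings are
arbitrary):

    # Start a sampling run, take a CPU away mid-measurement, bring it
    # back, then check whether the utilization numbers still add up.
    sar -u 1 10 &
    sleep 3
    echo 0 > /sys/devices/system/cpu/cpu1/online
    sleep 4
    echo 1 > /sys/devices/system/cpu/cpu1/online
    wait
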
> 
> Bryce
-- 

