Re: [PATCH v7 04/10] lib/test_linear_ranges: add a test for the 'linear_ranges'

On Wed, Apr 1, 2020 at 1:45 AM Vaittinen, Matti
<Matti.Vaittinen@xxxxxxxxxxxxxxxxx> wrote:
>
> Hello Brendan,
>
> Thanks for taking a look at this :) Much appreciated! I have always
> admired you guys who have the patience to do all the reviewing... It's
> definitely not my favourite job :/

Huh, you know, I thought the same thing about 3 years ago. I guess it
got to the point where I had to do reviews for the things I maintained,
and I got used to it. Then I got to a point where I was requesting so
many reviews from others that I felt that I owed the community
reviews. So no thanks necessary, I feel that I am just paying it
forward. :-)

> On Tue, 2020-03-31 at 11:08 -0700, Brendan Higgins wrote:
> > On Tue, Mar 31, 2020 at 5:23 AM Matti Vaittinen
> > <matti.vaittinen@xxxxxxxxxxxxxxxxx> wrote:
> > >     Add a KUnit test for the linear_ranges helper.
> > >
> > > Signed-off-by: Matti Vaittinen <matti.vaittinen@xxxxxxxxxxxxxxxxx>
> >
> > One minor nit, other than that:
> >
> > Reviewed-by: Brendan Higgins <brendanhiggins@xxxxxxxxxx>
> >
> > > ---
> > >
>
> /// Snip
>
> > > +
> > > +/* First things first. I deeply dislike unit-tests. I have seen all the
> > > + * hell breaking loose when people who think the unit tests are "the
> > > + * silver bullet" to kill bugs get to decide how a company should
> > > + * implement testing strategy...
> > > + *
> > > + * Believe me, it may get _really_ ridiculous. It is tempting to think
> > > + * that walking through all the possible execution branches will nail
> > > + * down 100% of bugs. This may lead to ideas about demands to get a
> > > + * certain % of "test coverage" - measured as line coverage. And that is
> > > + * one of the worst things you can do.
> > > + *
> > > + * Ask people to provide line coverage and they do. I've seen clever
> > > + * tools which generate test cases to test the existing functions - and
> > > + * by default these tools expect code to be correct and just generate
> > > + * checks which pass when run against the current code-base. Run this
> > > + * generator and you'll get tests that do not test that the code is
> > > + * correct but just verify that nothing changes. The problem is that
> > > + * testing working code is pointless. And if it is not working, your
> > > + * test must not assume it is working. You won't catch any bugs with
> > > + * such tests. What you can do is generate a huge amount of tests -
> > > + * especially if you are asked to provide 100% line-coverage x_x. So
> > > + * what do these tests - which are not finding any bugs now - do?
> >
> > I don't entirely disagree. I have worked on projects that do testing
> > well, where it actually makes development faster, and on projects
> > that do testing poorly, where it never improves code quality and is
> > just an encumbrance. I have never seen a project get to 100% coverage
> > (nor would I want to).
> >
> > Do you feel differently about incremental coverage vs. absolute
> > coverage? I have found incremental coverage to be a lot more valuable
> > in my experience.
>
> I think I have only been dealing with projects measuring absolute
> coverage. Seeing coverage as a bare %-number is mostly not interesting
> to me. What I think could be interesting is seeing the code-paths a
> test has actually walked through. I believe the code spots that should
> be tested ought to be hand-picked by a human. When we look at just a
> %-number, we do not know what kind of code the test has tested.

Ah, okay. Coverage by code path is a thing: GCOV + LCOV for the Linux
kernel can actually produce nice reports that show exactly which code
paths have been executed. It requires a bit of manual review, but I
have found it pretty handy. Let me try to find you an example...
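
In the meantime, here is a rough sketch of the recipe from memory -
treat it as a starting point rather than gospel, since
Documentation/dev-tools/gcov.rst has the authoritative steps, and the
lcov invocation below assumes debugfs is mounted at /sys/kernel/debug:

  # Kernel config for a gcov-instrumented kernel:
  CONFIG_DEBUG_FS=y
  CONFIG_GCOV_KERNEL=y
  CONFIG_GCOV_PROFILE_ALL=y

  # Boot the instrumented kernel, run the tests, then harvest the
  # counters exported under debugfs and render the HTML report:
  lcov --capture --directory /sys/kernel/debug/gcov \
       --output-file coverage.info
  genhtml coverage.info --output-directory coverage-html

The genhtml output is the part I meant: it annotates each source line
and function with its hit count, so you can see the code paths a test
actually walked instead of a single %-number.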

> > You seem pretty passionate about this. Would you like to be included
> > in our unit testing discussions in the future?
>
> I think it would be nice :) I don't expect I will be an active talker
> there, but I would really like to know what direction things are
> heading in general. And who knows, maybe I will have a word to say at
> times :) So please, include me if it is not too much trouble for you.

Absolutely! Would you be interested in joining our mailing list:

https://groups.google.com/g/kunit-dev

> //Snip
>
> > > +
> > > +static void range_test_get_value(struct kunit *test)
> > > +{
> > > +       int ret, i;
> > > +       unsigned int sel, val;
> > > +
> > > +       for (i = 0; i < RANGE1_NUM_VALS; i++) {
> > > +               sel = range1_sels[i];
> > > +               ret = linear_range_get_value_array(&testr[0], 2, sel, &val);
> > > +               KUNIT_EXPECT_EQ(test, 0, ret);
> >
> > nit: It looks like the next line might crash if this expectation
> > fails. If this is the case, you might want to use a KUNIT_ASSERT_*
> > here.
>
> Huh. I re-read this and almost agreed with you. Then I re-re-read this
> and disagreed. Perhaps we should write a unit-test to test this ;)
>
> The range1_sels and range1_vals arrays should always be of the same
> size, so the crash should not occur here. If RANGE1_NUM_VALS were bad,
> then we would get the crash already at
>
> > > +               sel = range1_sels[i];
>
> linear_range_get_value_array() may return a non-zero value if the
> selector contained in range1_sels[i] is not in the range - but
> range1_vals[i] should still be valid memory.

Got it. Sorry, I just assumed the second check was unsafe to run if the
first one failed.

All looks good to me then!
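
For anyone reading along in the archives, here is the EXPECT vs ASSERT
distinction we were discussing, as a contrived sketch (the buffer and
the test name below are made up for illustration; they are not from the
patch):

#include <kunit/test.h>
#include <linux/slab.h>

/* KUNIT_EXPECT_* records a failure and keeps executing the test case;
 * KUNIT_ASSERT_* records it and aborts the case immediately. So ASSERT
 * is the right tool when later lines would touch memory that the
 * failed check was guarding - which, as Matti points out, is not the
 * situation in range_test_get_value().
 */
static void assert_vs_expect_example(struct kunit *test)
{
	int *buf = kunit_kmalloc(test, sizeof(*buf), GFP_KERNEL);

	/* A NULL buf would make the write below crash, so bail out of
	 * the test case here instead of merely logging a failure:
	 */
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, buf);

	*buf = 42;
	/* A failure here is only a wrong value, not a crash risk, so
	 * EXPECT is fine and any remaining checks still run:
	 */
	KUNIT_EXPECT_EQ(test, 42, *buf);
}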


