Re: EXTERNAL: Re: List of Known Issues for a particular release

Hi Mark,

On Thu, Jul 11, 2019 at 12:02 PM Mark T. Ortell
<mtortell@xxxxxxxxxxxxxxx> wrote:
>
> Elijah,
>
> Thanks for the response. I am not clear whether the test_expect_failure means that the test is trying to do something that should fail and so it is a valid test case or if it is a test case that is failing, but should succeed and has only been temporarily disabled until it is fixed. I'm guessing the former. In this case, if it successfully did whatever it were testing, that would be an issue. A simple example of this would be a test to try to login with an invalid username and password. That is expected to fail and if it passed, it would be an issue. If this is the case, then it doesn't look like it provides a list of issues. Please clarify what the test_expect_failure indicates.

Please don't top-post on this list.

test_expect_failure (along with the other helper functions in the test
harness library, such as test_expect_success, test_must_fail,
test_might_fail, etc.) is explained in t/README.  By its definition,
it technically satisfies the "list of known issues" you asked for.
However, most software products that publish a list of known issues
have probably curated it down to problems that users are likely to see
or be curious about, and which the maintainers want to inform users
of, both to reduce support load and to help users avoid problems.
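For illustration, here's a minimal sketch of the semantics t/README
describes.  These simplified stand-ins are mine, not the real
implementations from t/test-lib.sh; the point is only that
test_expect_success marks behavior that must work today, while
test_expect_failure documents a known breakage -- the test describes
what *should* happen but currently doesn't:

```shell
#!/bin/sh
# Simplified stand-ins (NOT Git's actual harness) for the helpers
# described in t/README.

# Marks behavior that is expected to work: passing prints "ok",
# failing is a real test failure.
test_expect_success () {
	desc=$1; script=$2
	if sh -c "$script" >/dev/null 2>&1
	then echo "ok - $desc"
	else echo "FAIL - $desc"
	fi
}

# Marks a known breakage: the script describes the *desired*
# behavior, so failing is the expected status quo, and passing
# means the breakage has (perhaps accidentally) been fixed.
test_expect_failure () {
	desc=$1; script=$2
	if sh -c "$script" >/dev/null 2>&1
	then echo "FIXED - $desc (known breakage vanished)"
	else echo "still broken - $desc (known breakage)"
	fi
}

test_expect_success 'sanity: true succeeds' 'true'
test_expect_failure 'feature not yet implemented' 'false'
```

So a test_expect_failure that starts "passing" is not a bug in the
test; it is a signal that a documented breakage went away and the
test should be flipped to test_expect_success.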

This list is not curated in any such way.  It's just a list of issues
developers thought to document for themselves and/or other developers.
It is thus quite different from what you might want:

(1) There is evidence that some have used it for "in an ideal world,
this thing should support this feature too, in which case I'd expect
it to behave a certain way that it doesn't yet."  The line between
feature (what works is fine but we could make it better) and bug (it's
not really correct unless it does it this way) gets really blurry at
times, and you'd pick a much different tradeoff in communication
between developers than in communication from developers to users;
with other developers you spend a lot more time talking about
internals, goals, and the direction we'd like to move the software in.

(2) Also, some of these "known breakages" could be in corner cases
that are very unlikely to be hit by users -- in some cases not only
unlikely to be hit by any individual user, but unlikely ever to be
hit by anyone anywhere (some of the merge-recursive tests I added
might fall into that category).

(3) There may also be cases where someone once thought that the
optimal behavior would be a little different and planned to implement
more features, then later changed their mind but forgot to clean up
the testcases.

(4) ...and that's just a few off the top of my head.  I'm sure the
list has several other things that make it not quite match what you
want.

As such, Brian's answer to your question elsewhere in this thread is
probably better than mine.  But if by chance you are just being forced
through a box-checking exercise, with no reason for needing these
results other than that someone asked that they be provided (I
sometimes had to go through such exercises when I worked at Sandia
National Labs years ago), then technically the command I gave you
could be used to satisfy it.

> Below is the output from the provided command:
<snip>

Not sure why you included this.

> Regards,
> Mark


Best Wishes,
Elijah

