Re: List of Known Issues for a particular release

> test_expect_failure (as well as other helper functions in the test 
> harness library, such as test_expect_success, test_must_fail, 
> test_might_fail, etc.) is explained in t/README.  By its definition, 
> it technically satisfies the "list of known issues" you asked for.
> However, most software products that publish a list of known issues 
> have probably curated problems that users are likely to see or be 
> curious about, and which they want to document both to reduce support 
> load and to help users avoid problems.

Perfect, that answers my question. 
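
To double-check my understanding, here is roughly what those helpers look
like in a t/ test script, going by t/README.  The file below is purely
illustrative (there is no "git frobnicate" command), not an actual test
from the suite:

  #!/bin/sh

  test_description='illustrative example, not a real test file'

  . ./test-lib.sh

  # An ordinary test: its script must succeed for the suite to pass.
  test_expect_success 'setup a trivial repository' '
          echo hello >file &&
          git add file &&
          git commit -m initial
  '

  # A known breakage: the script is expected to fail today.  The harness
  # counts it as a "known breakage" rather than a failure, and reports it
  # as "fixed" once the desired behavior is actually implemented.
  test_expect_failure 'behavior we would like but do not have yet' '
          git frobnicate file
  '

  test_done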

> This list is not curated in any such way.  It's just a list of issues 
> developers thought to document for themselves and/or other developers.
> It is thus way different than what you might want:
> 
> (1) There is evidence that some have used it for "In an ideal world, 
> this thing should support this feature too in which case I'd expect it 
> to behave a certain way that it doesn't yet."  The line between 
> feature (what works is fine but we could make it better) and bug (it's 
> not really correct if it doesn't do it this way) gets really blurry at 
> times, and you'd pick a much different tradeoff in communication 
> between developers than you would in communication from developers to 
> users; with other developers you spend a lot more time talking about 
> internals and goals and direction we'd like to move the software in.
> 
> (2) Also, some of these "known breakages" could be in corner cases 
> that are very unlikely to be hit by users; perhaps not only unlikely 
> to be hit by individual users, but unlikely that anyone anywhere will 
> ever hit that error (some of the merge recursive tests I added might 
> fall into that category).
> 
> (3) There may also be cases where someone once thought the optimal 
> behavior should be a little different and planned to implement more 
> features, then later changed their mind but forgot to clean up the 
> testcases.
> 
> (4) ...and that's just a few off the top of my head.  I'm sure the 
> list has several other things that make it not quite match what you 
> want.

Thanks for the detailed clarification; this helps a lot. It may require a
bit of manual work to sift through these and see which could potentially
affect our use cases (very unlikely any will, but that is the due diligence
required for functional safety).

> As such, Brian's answer to your question elsewhere in this thread is 
> probably better than mine.  But if by chance you are just being forced 
> to go through a box-checking exercise, and there is no reason for 
> needing these results other than that someone asked for them to be 
> provided (I sometimes had to go through such exercises when I worked 
> at Sandia National Labs years ago), then technically the command I 
> gave you could be used to satisfy it.

Thanks, Brian, for your thoughts. We don't use much open source software
for our functional safety development (for this reason), so this is new 
territory for us. I think Elijah's information will get me most of the way there.
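
For the record, the kind of command that enumerates these self-documented
"known breakages" is along the lines of the following grep over the test
scripts (I am not claiming this is the exact command from earlier in the
thread):

  grep -n 'test_expect_failure' t/t[0-9]*.sh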

Also, I pulled down the source for the release version we are using and ran 
the test suite on it, so I now have results confirming that the software works 
according to the design; that is really helpful to have as well.
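
In case anyone else has to do the same exercise: t/README describes a few
ways to run the suite, roughly as follows (the parallel job count is just
an example):

  # From the top of the source tree: build git, then run the whole suite.
  make
  make test

  # Or run the tests in parallel from within t/ using prove.
  cd t
  prove --timer --jobs 8 ./t[0-9]*.sh

  # An individual script can also be run directly; -v gives verbose output.
  ./t0000-basic.sh -v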

Cheers,
Mark



