Re: [RFC PATCH v3 1/1] unit tests: Add a project plan document

On 2023.06.29 12:42, Linus Arver wrote:
> Hello,
> 
> Josh Steadmon <steadmon@xxxxxxxxxx> writes:
> 
> > In our current testing environment, we spend a significant amount of
> > effort crafting end-to-end tests for error conditions that could easily
> > be captured by unit tests (or we simply forgo some hard-to-setup and
> > rare error conditions).Describe what we hope to accomplish by
> 
> I see a minor typo (no space before the word "Describe").

Thanks, fixed for V4.

> > +=== Comparison
> > +
> > +[format="csv",options="header",width="75%"]
> > +|=====
> > +Framework,"TAP support","Diagnostic output","Parallel execution","Vendorable / ubiquitous","Maintainable / extensible","Major platform support","Lazy test planning","Runtime-skippable tests","Scheduling / re-running",Mocks,"Signal & exception handling","Coverage reports"
> > +https://lore.kernel.org/git/c902a166-98ce-afba-93f2-ea6027557176@xxxxxxxxx/[Custom Git impl.],[lime-background]#True#,[lime-background]#True#,?,[lime-background]#True#,[lime-background]#True#,[lime-background]#True#,[lime-background]#True#,?,?,[red-background]#False#,?,?
> > +https://cmocka.org/[cmocka],[lime-background]#True#,[lime-background]#True#,?,[red-background]#False#,[yellow-background]#Partial#,[yellow-background]#Partial#,[yellow-background]#Partial#,?,?,[lime-background]#True#,?,?
> > +https://libcheck.github.io/check/[Check],[lime-background]#True#,[lime-background]#True#,?,[red-background]#False#,[yellow-background]#Partial#,[lime-background]#True#,[yellow-background]#Partial#,?,?,[red-background]#False#,?,?
> > +https://github.com/rra/c-tap-harness/[C TAP],[lime-background]#True#,[red-background]#False#,?,[lime-background]#True#,[yellow-background]#Partial#,[yellow-background]#Partial#,[yellow-background]#Partial#,?,?,[red-background]#False#,?,?
> > +https://github.com/silentbicycle/greatest[Greatest],[yellow-background]#Partial#,?,?,[lime-background]#True#,[yellow-background]#Partial#,?,[yellow-background]#Partial#,?,?,[red-background]#False#,?,?
> > +https://github.com/Snaipe/Criterion[Criterion],[lime-background]#True#,?,?,[red-background]#False#,?,[lime-background]#True#,?,?,?,[red-background]#False#,?,?
> > +https://github.com/zorgnax/libtap[libtap],[lime-background]#True#,?,?,?,?,?,?,?,?,?,?,?
> > +https://nemequ.github.io/munit/[µnit],?,?,?,?,?,?,?,?,?,?,?,?
> > +https://github.com/google/cmockery[cmockery],?,?,?,?,?,?,?,?,?,[lime-background]#True#,?,?
> > +https://github.com/lpabon/cmockery2[cmockery2],?,?,?,?,?,?,?,?,?,[lime-background]#True#,?,?
> > +https://github.com/ThrowTheSwitch/Unity[Unity],?,?,?,?,?,?,?,?,?,?,?,?
> > +https://github.com/siu/minunit[minunit],?,?,?,?,?,?,?,?,?,?,?,?
> > +https://cunit.sourceforge.net/[CUnit],?,?,?,?,?,?,?,?,?,?,?,?
> > +https://www.kindahl.net/mytap/doc/index.html[MyTAP],[lime-background]#True#,?,?,?,?,?,?,?,?,?,?,?
> > +|=====
> 
> This table is a little hard to read. Do you have your patch on GitHub or
> somewhere else where this table is rendered with HTML?

Yes, I've pushed a WIP of this to:
https://github.com/steadmon/git/blob/unit-tests-asciidoc/Documentation/technical/unit-tests.adoc

However, this doesn't render the color coding in the table, so you may
also want to just build it locally:
`make -C Documentation technical/unit-tests.html`
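
Alternatively, if you don't have Git's full doc toolchain set up, running
asciidoctor directly should work too (assuming asciidoctor is installed and
you're on the WIP branch above, where the file is named unit-tests.adoc):
`asciidoctor Documentation/technical/unit-tests.adoc`
That writes the HTML next to the source file.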

> It would help to explain each of the answers that are filled in
> with the word "Partial", to better understand why it is the case. I
> suspect this might get a little verbose, in which case I suggest just
> giving each framework its own heading.

Yeah, that is coming in V4.

> The column names here are slightly different from the headings used
> under "Desired features"; I suggest making them the same.

Fixed for V4.

> Also, how about grouping some of these together? For example "Diagnostic
> output" and "Coverage reports" feel like they could be grouped under
> "Output formats". Here's one way to group these:
> 
>     1. Output formats
> 
>     TAP support
>     Diagnostic output
>     Coverage reports
> 
>     2. Cost of adoption
> 
>     Vendorable / ubiquitous
>     Maintainable / extensible
>     Major platform support
> 
>     3. Performance flexibility
> 
>     Parallel execution
>     Lazy test planning
>     Runtime-skippable tests
>     Scheduling / re-running
> 
>     4. Developer experience
> 
>     Mocks
>     Signal & exception handling

I didn't state it outright, but the features are roughly (though not
perfectly) ordered by priority. Of course, other people may prioritize
them differently, and I'm not set on this ordering either. Grouping by
category does seem more useful, though.


> I can think of some other metrics to add to the comparison, namely:
> 
>     1. Age (how old is the framework)
>     2. Size in KLOC (thousands of lines of code)
>     3. Adoption rate (which notable C projects already use this framework?)
>     4. Project health (how active are its developers?)
> 
> I think for 3 and 4, we could probably mine some data out of GitHub
> itself.

Interesting, I'll see about adding some of these.
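
For 3 and 4, here's a rough sketch of what mining GitHub could look like,
just as an illustration (this uses the public REST API and assumes jq is
installed; Criterion is only picked as an example repo):

    # last push, open issue count, and stars for one candidate framework
    curl -s https://api.github.com/repos/Snaipe/Criterion |
        jq '{pushed_at, open_issues_count, stargazers_count}'

Something like that per repo would give a first-pass view of project
health, though adoption by notable C projects probably still needs a
manual survey.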


> Lastly it would be helpful if we can mark some of these categories as
> must-haves. For example would lack of "Major platform support" alone
> disqualify a test framework? This would help fill in the empty bits in
> the comparison table because we could skip looking too deeply into a
> framework if it fails to meet a must-have requirement.

Yeah, right now I think supporting TAP is the only non-negotiable one,
but I'll add a discussion about priorities.
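
For reference, by TAP support I mean emitting the usual plan/result lines
so that existing harnesses like prove can consume them; a hand-written
example (not output from any particular framework above):

    1..3
    ok 1 - setup succeeds
    not ok 2 - error path returns -1
    # expected -1, got 0
    ok 3 - cleanup # SKIP unreachable after failure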

> Thanks,
> Linus

Thanks for the review!


