Re: [PATCH v5 0/9] Introduce clar testing framework

Hi Patrick

On 20/08/2024 13:59, Patrick Steinhardt wrote:
> On Fri, Aug 16, 2024 at 02:37:34PM +0100, Phillip Wood wrote:
>> Hi Patrick
>>
>> On 16/08/2024 08:04, Patrick Steinhardt wrote:
>>
>>   - As I think you've pointed out elsewhere there are no equivalents
>>     for check_int(a, <|<=|>|>=, b) so we're forced to use cl_assert()
>>     and forego the better diagnostic messages that come from a
>>     dedicated comparison macro. We should fix this as a priority.

> Agreed, this one also feels rather limiting to me. Are you okay with me
> doing this as a follow-up in case this series lands?

Yes
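
For the record, I was imagining something along the lines of the sketch
below (untested; cl_assert_cmp_i is a made-up name and I'm reusing
clar's existing cl_fail() for reporting):

/*
 * Untested sketch; needs <stdint.h> and <stdio.h> in addition to
 * clar.h. cl_fail() is existing clar API, cl_assert_cmp_i() is not.
 */
#define cl_assert_cmp_i(a, op, b) do { \
        intmax_t a_ = (a), b_ = (b); \
        if (!(a_ op b_)) { \
                char err_[96]; \
                snprintf(err_, sizeof(err_), \
                         "'%s %s %s' failed: %jd vs %jd", \
                         #a, #op, #b, a_, b_); \
                cl_fail(err_); \
        } \
} while (0)

That way a failing cl_assert_cmp_i(len, <=, alloc) would report both
values rather than just the stringified expression.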

>>   - cl_assert_equal_i() casts its arguments to int whereas check_int()
>>     and check_uint() are careful to avoid truncation and keep the
>>     original signedness (if that's a word). I think that's unlikely to
>>     be a problem with our current tests but could trip us up in the
>>     future.
>
> Yeah. If it ever becomes a problem we can likely just introduce
> something like `cl_assert_equal_u()` to achieve the same for unsigned.
> Both should probably end up casting to `intmax_t` and `uintmax_t`,
> respectively.

Supporting wider arguments would make sense. At the moment
clar__assert_equal() does not support PRIiMAX, only the non-standard
PRIuZ.
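
If it learnt PRIuMAX as well, the unsigned variant could widen instead
of truncating. An untested sketch, assuming clar__assert_equal() keeps
its current argument order:

#include <inttypes.h>

/* Untested sketch of the cl_assert_equal_u() Patrick suggested. */
#define cl_assert_equal_u(u1, u2) \
        clar__assert_equal(__FILE__, __func__, __LINE__, \
                           #u1 " != " #u2, 1, "%" PRIuMAX, \
                           (uintmax_t)(u1), (uintmax_t)(u2))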

>>   - cl_assert_equal_s() prints each argument as-is. This means
>>     that it passes NULL arguments through to snprintf() which is
>>     undefined according to the C standard. Compare this to check_str(),
>>     which is NULL safe and is careful to escape control characters and
>>     add delimiters to the beginning and end of the string to make it
>>     obvious when a string contains leading or trailing whitespace.
>
> Good point indeed, and something I'm happy to fix upstream.

That's great
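
The NULL handling itself could be as simple as this untested sketch
(the escaping and delimiting that check_str() does would be a separate
improvement):

#include <string.h>

/* Two NULLs compare equal, NULL vs non-NULL unequal, and NULL is
 * never handed through to snprintf(). */
static int strings_equal(const char *a, const char *b)
{
        return (a && b) ? !strcmp(a, b) : a == b;
}

static const char *printable(const char *s)
{
        return s ? s : "(null)";
}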

>>   - The cl_assert_equal_?() macros lack type safety for the arguments
>>     being compared as they are wrappers around a variadic function.
>>     That could be fixed by having each macro wrap a dedicated
>>     function that wraps clar__fail().
>
> Some of them do indeed, others generate issues. I don't think we have to
> have dedicated functions, but could do something about this with
> `__attribute__((format (printf, ...)))`.

I wondered about suggesting '__attribute__((format (printf, ...)))' but
we'd need to double up the format argument in order to use it, which is
kind of messy. At the moment we pass "%i" with two integers.
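
To illustrate what I meant by dedicated functions wrapping clar__fail(),
here is an untested sketch (I'm quoting clar__fail()'s signature from
memory, so it may not match clar.h exactly):

#include <stdint.h>
#include <stdio.h>

/* The macro expands to a call with typed parameters, so the compiler
 * checks the arguments itself; no varargs involved. clar__fail() is
 * declared in clar.h. */
static void assert_equal_signed(const char *file, const char *func,
                                size_t line, const char *err,
                                intmax_t i1, intmax_t i2)
{
        char desc[64];

        if (i1 == i2)
                return;
        snprintf(desc, sizeof(desc), "%jd != %jd", i1, i2);
        clar__fail(file, func, line, err, desc, 1);
}

#define cl_assert_equal_i(i1, i2) \
        assert_equal_signed(__FILE__, __func__, __LINE__, \
                            #i1 " != " #i2, (i1), (i2))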

>>   - There is no equivalent of test_todo() to mark assertions that are
>>     expected to fail. We're not using that yet in our tests but our
>>     experience with the integration tests suggests that we are likely
>>     to want this in the future.
>
> Heh, funny that you mention this. I had this discussion some 6 years ago
> I think, where I also mentioned that this should exist as a feature. In
> any case, I agree.

Excellent!

>>   - To me the "sandbox" feature is mis-named as it does not provide any
>>     confinement. It is instead a useful mechanism for running a test in
>>     a temporary directory created from a template.
>
> I guess we'll either just have to not use it or ignore that it's named a
> bit awkwardly. Changing this in clar probably wouldn't work well because
> other projects may depend on it.

Yes, it's probably too late to rename it. I think being able to create
a test directory from a template directory could be useful; we just
need to be mindful that the test code is not actually confined by a
sandbox.

>>   - There are no checks for failing memory allocations - the return
>>     values of calloc() and strdup() are used without checking for NULL.
>
> I'll commit to fixing this upstream if this lands.

Great
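
Presumably that just means adding git-style wrappers inside clar,
something like this untested sketch (the names are made up):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Abort the whole run on allocation failure rather than crashing
 * later on a NULL dereference. */
static void *xcalloc(size_t nmemb, size_t size)
{
        void *p = calloc(nmemb, size);

        if (!p) {
                fprintf(stderr, "clar: out of memory\n");
                abort();
        }
        return p;
}

static char *xstrdup(const char *s)
{
        char *p = strdup(s);

        if (!p) {
                fprintf(stderr, "clar: out of memory\n");
                abort();
        }
        return p;
}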

>>   - The use of longjmp is a bit of a double-edged sword as it makes it
>>     easy to leak resources on test failures.
>
> I have to say that this is one of the best features of the clar to me.
> The current test framework we use doesn't use longjmp, which in theory
> requires you to always `return` whenever there is an error. But that
> results in code that is both awful to read and write, so most of the
> tests simply don't bother at all. And consequently, the tests are quite
> likely to cause segfaults once one of the checks fails, because we
> didn't abort running the testcase even though things are broken.

I thought that the tests took care to bail out early where it made
sense. Sometimes it is useful to continue: for example, when checking a
strbuf we might want to check both alloc and len before bailing out.
We're probably not losing much by not doing that, though.

> In practice, I'd claim that you don't typically care all that much about
> memory leaks once your basic assertions start to fail.

I tend to agree. I was thinking more about exhausting file descriptors
and cleaning up files, but that's probably not a big issue in practice.
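
For example, in a hypothetical test like the one below the descriptor
is leaked whenever the read assertion fails, because clar longjmp()s
straight back to the runner:

#include <fcntl.h>
#include <unistd.h>

void test_example__parses_header(void)
{
        char buf[4];
        int fd = open("input", O_RDONLY);

        cl_assert(fd >= 0);
        cl_assert_equal_i(4, read(fd, buf, sizeof(buf))); /* may longjmp */
        close(fd); /* never reached if the assertion above fails */
}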

> So, things that need addressing and that I'm happy to do as follow-ups:
>
>    - Introduce functions that compare integers.
>
>    - Improve type safety of the `cl_assert_equal_?()` macros.
>
>    - Adapt `cl_assert_equal_s()` to handle NULL pointers.
>
>    - Introduce checks for failing memory allocations.
>
> Nice to have would be support for known-failing tests.

This all sounds good to me

Sorry for missing this mail earlier.

Phillip

> Patrick



