On Fri, Aug 16, 2024 at 02:37:34PM +0100, Phillip Wood wrote:
> Hi Patrick
>
> On 16/08/2024 08:04, Patrick Steinhardt wrote:
> > Hi,
> >
> > this is the fifth version of my patch series that introduces the
> > clar testing framework for our unit tests.
>
> Thanks for working on this, I'm broadly in favor of this change. I
> like the way it keeps each test as a function and adds automatic
> test registration with support for setup and teardown functions. I
> am keen though to keep an emphasis on good diagnostic messages when
> tests fail. Looking at the conversions in this series, all of the
> test_msg() lines that provide useful debugging context are removed.
> I'm not sure using YAML to report errors rather than human-readable
> messages is an improvement either.
>
> I wonder if we want to either improve the assertions offered by clar
> or write our own. I find the names of the cl_assert_equal_?()
> functions a bit cumbersome. The aim of the check_* names was to try
> and be both concise and descriptive. Adding our own check_* macros
> on top of clar would also make it easier to port our existing tests.
>
> Here are some thoughts having read through the assertion and error
> reporting code:
>
>  - As I think you've pointed out elsewhere, there are no equivalents
>    for check_int(a, <|<=|>|>=, b), so we're forced to use
>    cl_assert() and forego the better diagnostic messages that come
>    from a dedicated comparison macro. We should fix this as a
>    priority.

Agreed, this one also feels rather limiting to me. Are you okay with
me doing this as a follow-up in case this series lands?

>  - cl_assert_equal_i() casts its arguments to int, whereas
>    check_int() and check_uint() are careful to avoid truncation and
>    keep the original signedness (if that's a word). I think that's
>    unlikely to be a problem with our current tests, but could trip
>    us up in the future.

Yeah. If it ever becomes a problem we can likely just introduce
something like `cl_assert_equal_u()` to achieve the same for unsigned
integers. Both should probably end up casting to `intmax_t` and
`uintmax_t`, respectively.
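
To make that a bit more concrete, here is a rough sketch of the kind
of macro I'd envision. Note that `cl_assert_cmp_i()` is a made-up
name and not something the clar provides today; the sketch assumes
only that "clar.h" is included and that the existing `cl_fail()`
helper keeps its current shape:

    #include <inttypes.h>
    #include <stdio.h>

    /*
     * Hypothetical comparison assertion in the spirit of check_int():
     * evaluate both operands once, widen them to `intmax_t` to avoid
     * truncation, and report both values when the check fails.
     */
    #define cl_assert_cmp_i(a, op, b) do { \
            intmax_t _a = (a), _b = (b); \
            if (!(_a op _b)) { \
                    char _buf[128]; \
                    snprintf(_buf, sizeof(_buf), \
                             "%s %s %s (%" PRIdMAX " vs %" PRIdMAX ")", \
                             #a, #op, #b, _a, _b); \
                    cl_fail(_buf); \
            } \
    } while (0)

Such a macro would then be used as e.g. `cl_assert_cmp_i(nr_used, <=,
nr_allocated)`, and an unsigned variant on top of `uintmax_t` could
follow the same pattern.
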
>  - cl_assert_equal_s() prints each argument as-is. This means that
>    it passes NULL arguments through to snprintf(), which is
>    undefined according to the C standard. Compare this to
>    check_str(), which is NULL-safe and is careful to escape control
>    characters and add delimiters to the beginning and end of the
>    string to make it obvious when a string contains leading or
>    trailing whitespace.

Good point indeed, and something I'm happy to fix upstream.

>  - The cl_assert_equal_?() macros lack type safety for the arguments
>    being compared, as they are wrappers around a variadic function.
>    That could be fixed by having each macro wrap a dedicated
>    function that wraps clar__fail().

Some of them do indeed, while others generate issues. I don't think
we have to have dedicated functions, but we could do something about
this with `__attribute__((format (printf, ...)))`.

>  - There is no equivalent of test_todo() to mark assertions that are
>    expected to fail. We're not using that yet in our tests, but our
>    experience with the integration tests suggests that we are likely
>    to want this in the future.

Heh, funny that you mention this. I had this discussion some 6 years
ago, I think, where I also mentioned that this should exist as a
feature. In any case, I agree.

>  - To me the "sandbox" feature is mis-named, as it does not provide
>    any confinement. It is instead a useful mechanism for running a
>    test in a temporary directory created from a template.

I guess we'll either just have to not use it or ignore that it's
named a bit awkwardly. Changing this in clar probably wouldn't work
well because other projects may depend on it.

>  - There are no checks for failing memory allocations - the return
>    values of calloc() and strdup() are used without checking for
>    NULL.

I'll commit to fixing this upstream if this lands.

>  - The use of longjmp is a bit of a double-edged sword, as it makes
>    it easy to leak resources on test failures.

I have to say that this is one of the best features of the clar to
me. The test framework we currently use doesn't use longjmp, which in
theory requires you to always `return` whenever there was an error.
But that results in code that is awful to both read and write, so
most of the tests simply don't bother at all. Consequently, the tests
are quite likely to segfault once one of the checks fails, because we
keep running the test case even though things are broken.

In practice, I'd claim that you don't typically care all that much
about memory leaks once your basic assertions start to fail.

So, things that need addressing and that I'm happy to do as
follow-ups:

  - Introduce functions that compare integers.
  - Improve type safety of the `cl_assert_equal_?()` macros.
  - Adapt `cl_assert_equal_s()` to handle NULL pointers.
  - Introduce checks for failing memory allocations.

Nice to have would be support for known-failing tests.

Patrick
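
P.S.: For the allocation checks, here is a rough sketch of what I
have in mind. `xcalloc()` is a hypothetical name and not something
the clar has today; a checked wrapper for strdup() would follow the
same pattern:

    #include <stdio.h>
    #include <stdlib.h>

    /*
     * Hypothetical checked allocator for the clar's internals: abort
     * with a clear message instead of dereferencing a NULL pointer
     * later on.
     */
    static void *xcalloc(size_t nmemb, size_t size)
    {
            void *ptr = calloc(nmemb, size);
            if (!ptr) {
                    fprintf(stderr, "clar: out of memory\n");
                    abort();
            }
            return ptr;
    }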