Hello Mark,

On Thursday, January 27, 2022 5:37:29 AM EST Mark Wielaard wrote:
> On Thu, Jan 27, 2022 at 10:41:36AM +0100, Roberto Ragusa wrote:
> > On 1/22/22 10:05 PM, Mark Wielaard wrote:
> > > So I would give valgrind a 6/6 (100%) score :)
> >
> > But if the compiler starts copying zeros on uninitialized memory,
> > valgrind loses any ability to detect the defect in the code.
>
> Yes. So that is the compromise. You'll always get initialized zeros
> for local variables, so any usage is defined (though probably buggy).
> But some of the tools, like valgrind memcheck, will be unable to
> detect the buggy code.

I think people doing debugging probably have a special set of compile
flags. I do.
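To make that concrete, here is a minimal sketch (an illustration of
mine, not a case from the actual test program) of the kind of bug in
question:

    /* uninit.c - built plainly (e.g. gcc -g -O0 uninit.c), valgrind
     * memcheck reports "Conditional jump or move depends on
     * uninitialised value(s)" at the if below.  Built with the
     * zero-init option under discussion (gcc/clang
     * -ftrivial-auto-var-init=zero), flag is always 0, so the
     * behavior is defined (though still buggy) and memcheck stays
     * silent.
     */
    #include <stdio.h>

    int main(void)
    {
        int flag;           /* never initialized */

        if (flag)           /* read of an indeterminate value */
            printf("flag set\n");
        return 0;
    }

The program becomes deterministic, but the logic error is still there;
it has just moved out of memcheck's reach.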
> If you believe the tools we have are pretty bad to begin with and/or
> not actually used by people to find such bugs then this is a good
> compromise.

Nobody has said the tools are "bad". But a couple of things have been
discovered. One is that the number of warnings and detections depends
on not optimizing. That is also in conflict with detecting the
run-time problems that FORTIFY_SOURCE would have picked up, because it
only works when optimizing is on.
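For example (another illustrative sketch; the exact diagnostics vary
with the gcc/glibc combination):

    /* fortify.c - with gcc -O2 -D_FORTIFY_SOURCE=2 the strcpy below
     * becomes __strcpy_chk and the program aborts at run time with
     * "*** buffer overflow detected ***" (gcc usually also warns at
     * compile time here, since both sizes are known).  At -O0,
     * FORTIFY_SOURCE is inert and this check never fires.
     */
    #include <string.h>

    int main(void)
    {
        char buf[8];

        /* 26 bytes plus the NUL into an 8-byte buffer */
        strcpy(buf, "abcdefghijklmnopqrstuvwxyz");
        return buf[0];
    }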
Valgrind is a valuable tool. I use it all the time to find where
something segfaults. (I was active on that mailing list around 2003.)
I also use it in conjunction with radamsa for fuzzing sometimes. But
using it to find uninitialized variables means that you have to
traverse the path that leads to the problem. This is not trivial for
any medium-to-large project. So, the need really falls to static
analysis/compiler warnings.

> If you believe the tools are pretty good for detecting
> these issues (and I believe they are, the example given was just
> unfortunate because some of the issues weren't actually bad code and
> some others were rightfully optimized out, so would never trigger),
> then it is a bad compromise. But we definitely need to encourage
> people to use the tools more.

Yes. But at this moment, we need a safety net until detection gets
better. The bugs I put into the test program come from observing a lot
of kernel CVEs that I have to create assurance arguments for. They are
generally caught by fuzzing. And many are non-obvious because
initialization happens in one function, the value is used in another
function, and the variable itself is stored in yet another function.
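In sketch form (hypothetical names, and much simpler than the real
CVEs):

    /* The store is conditional inside dev_init(), the read happens
     * in dev_report(), and the storage itself lives in probe(), so
     * no single function shows the whole bug.
     */
    #include <stdio.h>

    struct dev { int state; };

    static void dev_init(struct dev *d, int have_hw)
    {
        if (have_hw)            /* initialized on one path only */
            d->state = 1;
    }

    static void dev_report(const struct dev *d)
    {
        printf("state=%d\n", d->state);   /* used here */
    }

    static void probe(int have_hw)        /* storage lives here */
    {
        struct dev d;
        dev_init(&d, have_hw);
        dev_report(&d);   /* d.state is indeterminate if have_hw == 0 */
    }

    int main(void)
    {
        probe(0);         /* the buggy path */
        return 0;
    }

No single function looks wrong in isolation, which is why these escape
review and the simple intraprocedural warnings.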
To make detection better, one needs a curated set of bugs to test
against. NIST's SARD dataset helps tool developers do just that. But
it doesn't have every conceivable variation. The tools are steadily
getting better. But I think we need the safety net in the meantime.

Best Regards,
-Steve