On Mon, 2023-10-23 at 17:08 +0300, Dan Carpenter wrote:
> After we make these two changes the bug is detected. It finds quite
> a few bugs that way. The MAYBE_FREED changes generate quite a few
> false positives so I haven't decided out how to deal with that yet...

I think the way to deal with this is "whitelisting". For any
established project, as you introduce new checks, there are going to
be false positives. You run the check for the first time and (assuming
you don't get a gazillion hits) separate the results into "valid" and
"invalid". The valid ones get patches and the invalid ones get marked
somehow: it could be Coverity-style markup, some external database, or
I am sure any number of other methods (for the things I care about I
keep a local database that I filter results through; see the sketch at
the end of this mail).

From then on only new occurrences (the result of code changes) will
ever surface, and those get triaged the same way.

And in this case even false positives might not be as bad. If we
remember the original Coverity paper (which, I think, is also what
inspired the original Smatch), it makes a very astute observation:
"sure, there are false positives, but as we were investigating them we
found other bugs. So if your code is confusing enough to throw
analysis tools off, it's likely confusing enough to host bugs of some
kind anyway."

Sadly I think the focus has since shifted a lot towards "as few false
positives as possible" and other such pie-in-the-sky goals (not to say
that some sort of balance isn't important, of course).

After all, we all know "computers are stupid", but when doing code
reviews (especially of large patches) it's quite easy to lose
concentration. So anything that prompts one to increase focus and
perform additional investigation is good.

Bye,
   Oleg
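
P.S. To make the "local database" idea concrete, here is a minimal
sketch in Python. The file name, the one-warning-per-line report
format, and the normalization rule are all made up for illustration;
this is not Smatch's actual output format or anyone's real tooling.

#!/usr/bin/env python3
# Sketch: suppress known false positives, print only new warnings.
#
# known_fp.txt holds one normalized warning per line. Anything that
# matches is dropped; anything new is printed for manual triage.

import re
import sys

DB = "known_fp.txt"

def normalize(line):
    # Drop the line number ("file.c:123 warn: ..." -> "file.c: warn:
    # ...") so an entry survives unrelated code churn above the
    # warning site. A hypothetical rule; pick whatever is stable for
    # your checker's output.
    return re.sub(r":\d+ ", ": ", line.strip())

def main():
    try:
        with open(DB) as f:
            known = {normalize(l) for l in f}
    except FileNotFoundError:
        known = set()

    for line in sys.stdin:
        if normalize(line) not in known:
            sys.stdout.write(line)  # new warning -> needs triage

if __name__ == "__main__":
    main()

You would pipe the checker's report through it, and once a hit is
triaged as invalid, append its normalized form to known_fp.txt so it
never surfaces again.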