In my opinion, it's better to just ignore old warnings.

When code is new, the warnings are mostly going to be correct. The original author is around and knows what the code does. Someone has the hardware ready to test any changes. High value, low burden. When the code is old, only the false positives are left. No one is testing the code. Low value, high burden. It also puts static checker authors in a difficult place, because now people have to work around our mistakes. It creates animosity, and it means we have to hold ourselves to a much higher standard for false positives.

It sounds like I'm complaining and being lazy, right? But Oleg Drokin has told me before that I spend too much time trying to silence false positives instead of working on new code. He has a point: we have a limited amount of time, and we have to make choices about what's the most useful thing we can do.

So what I do, and what the zero day bot does, is look at warnings one time and re-review old warnings whenever a file is changed. Kernel developers are very good at addressing static checker warnings and fixing the real issues.

People sometimes ask me to create a database of the warnings which I have already reviewed, but the answer is that anything old can be ignored. As I write this, it occurs to me that instead of a database of false positives, maybe we should keep a database of real bugs, to ensure that the fixes for anything real actually get applied.

regards,
dan carpenter