On Fri, Jul 29, 2016 at 3:19 AM, Borislav Petkov <bp@xxxxxxxxx> wrote:
>
> So this is exactly the problem: we should not fix perfectly fine code
> just so that gcc remains quiet. So when you say "fixed false positives"
> you actually mean, "changed it so that gcc -Wmaybe-u... doesn't fire"
> right?
>
> And we should not do that.

It's perfectly fine to do that when it makes sense and doesn't make the
code worse. Adding a few unnecessary initializations to make the
compiler shut up is not a problem.

But in the cases I looked at, that *really* didn't make sense. The
pattern was along the lines of

    struct something var;

    if (initialize_var(&var) < 0)
            return error;

    .. use "var.xyz" ..

and gcc complains that "var.xyz" may be uninitialized. Quite frankly,
the code made sense as written, and adding crazy initializations just
because gcc has a shit-for-brains warning seemed to make the code
worse.

And there was no sane *pattern* to why some cases warned. We have
things like the above in many places. The trigger seems to be that
"initialize_var()" needs to be inlined (automatically or explicitly
asked for), and that the error flow in the init function is just
complex enough.

When there is no telling which cases need explicit initialization, and
the answer changes randomly depending on compiler version and compiler
command line flags, there is *no* sane way to work around it. We could
play whack-a-mole with individual cases, but when the warning is that
unreliable, and the source changes needed to shut the broken warning up
are completely arbitrary and random, it's worse than useless.

                 Linus
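For reference, a minimal self-contained sketch of the pattern described
above. The helper get_value() and the struct layout are hypothetical,
invented to make the fragment compile; they are not in the original
mail. Whether gcc's -Wmaybe-uninitialized actually fires on this
depends on the gcc version, the optimization flags, and whether the
helper gets inlined, which is exactly the unpredictability being
complained about:

    struct something {
            int xyz;
    };

    extern int get_value(void);     /* hypothetical; returns < 0 on failure */

    /*
     * Init helper with a "just complex enough" error flow: *s is
     * written only on the success path, so the caller never actually
     * reads it uninitialized.
     */
    static inline int initialize_var(struct something *s)
    {
            int ret = get_value();

            if (ret < 0)
                    return ret;     /* error: *s deliberately left untouched */
            s->xyz = ret;
            return 0;
    }

    int use_var(void)
    {
            struct something var;

            if (initialize_var(&var) < 0)
                    return -1;      /* bail out before touching var */

            /*
             * var.xyz is always initialized when we get here, but once
             * initialize_var() is inlined, some gcc versions may still
             * warn that it "may be used uninitialized".
             */
            return var.xyz;
    }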