It seems a natural assumption to me that software which generates lots of gcc compiler warnings is likely to be buggier than software which does not. My 'gut feeling' is that if a developer has taken the trouble to write code that compiles cleanly, they have probably taken care over the other parts of the software too. But is anyone aware of any published literature supporting this? We might all suspect/guess it's true, but that is not proof.

I found this article:

http://www.springerlink.com/content/317x276846767585/
"Empirical analysis on the correlation between GCC compiler warnings and revision numbers of source files in five industrial software projects"

I can't actually read it without paying, but part of the abstract says: "We use such correlation to conclude that compiler warnings may be used as an indicator for the presence of software defects in source code."

Dave
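
P.S. To make concrete the kind of warning I have in mind, here is a minimal toy example (my own, not taken from the paper) where gcc -Wall flags what is in fact a genuine defect rather than mere style noise:

    #include <stdio.h>

    int main(void)
    {
        long n = 1234567890L;
        /* gcc -Wall (via -Wformat) warns here: format '%d' expects an
           'int' argument, but 'n' has type 'long'. On platforms where
           int and long differ in size this prints garbage, so the
           warning is pointing at a real portability bug. */
        printf("%d\n", n);
        return 0;
    }

If warnings of this sort point at real bugs often enough, a statistical correlation between warning counts and defect counts would not be surprising, but I'd still like to see it measured.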