On Sun, Jan 05, 2020 at 04:49:32AM +0100, Evan Rudford wrote:
> The problem of underfunding plagues many open source projects.

Does it? Citation please :)

And compared to what exactly?

> I wonder whether the Linux kernel suffers from underfunding in
> comparison to its global reach.

Does it? Again, specifics would be great to have.

> Although code reviews and technical discussions are working well, I
> argue that the testing infrastructure of the kernel is lacking.

Does it? No one can argue we are "doing too much testing"; more testing
is always wanted, and happening. Can you help with that effort?

> Severe bugs are discovered late, and they are discovered by developers
> that should not be exposed to that amount of breakage.

Specifics please. Remember that Linux runs on _EVERYTHING_, so testing
on _EVERYTHING_ is sometimes a bit hard, and bugs only show up later on
when people get around to running newer kernels on their specific
hardware/workload.

> Moreover, I feel that security issues do not receive enough resources.

Again, citation please?

I would argue that right now we have too many people/resources working
on security issues that are really, really minor in the overall scheme
of things.

What specific "security issues" are not currently being addressed?

> I argue that the cost of those bugs is vastly higher than the cost
> that it would take to setup a better quality assurance.

Why do you think that?

> With sufficient funding, the kernel might do all of the following:

Define "sufficient" :)

> - Make serious efforts to rewrite code with a bad security track
> record, instead of only fixing security vulnerabilities on an ad hoc
> basis.

What code do you think meets this criterion?

> - Although the kernel will always remain in C, make serious efforts to
> introduce a safe language for kernel modules and perhaps for some
> subsystems.

That is already happening for those people that really like those types
of languages. Why not help them out with that effort, as it seems to be
going slowly?

> - Build an efficient continuous integration (CI) infrastructure.

What is wrong with the one(s) that we currently have and rely on today?

> - Run a fast subset of the CI tests as a gatekeeper for all patch sets.

Um, this already happens; what needs to be added? What tests are not
being run that would catch issues? Why not add them to the existing
tools we all use today?

> - Run strict CI tests to ensure that userspace compatibility does not break.

What tests are those that are not being run today?

> - Run CI tests not only in virtual environments, but also on real hardware.

That's happening today; what specific platforms/hardware is not being
tested in this manner?

> - Run CI tests that aim to detect performance regressions.

Again, we are doing that; what tests need to be added to the tools?

> I realize that some companies are already running kernel testing
> infrastructure like this.

Exactly :)

> However, the development process seems to either lack the resources or
> the willingness to build a better quality assurance?

Why do you think this? Again, specifics please.

greg k-h