On Thu, Apr 16, 2020 at 07:31:25PM +0000, Saeed Mahameed wrote:
> On Thu, 2020-04-16 at 19:20 +0200, Greg KH wrote:
> > So far the AUTOSEL tool has found so many real bugfixes that it isn't
> > funny. If you don't like it, fine, but it has proven itself _way_
> > beyond my wildest hopes already, and it just keeps getting better.
> Now I really don't know what the right balance is here. On one hand,
> autosel is doing a great job; on the other hand, we know it can screw
> up in some cases, and we know it will.
> So we decided to make sacrifices for the greater good? :)
autosel is going to screw up, I'm going to screw up, you're going to
screw up, and Linus is going to screw up. The existence of the stable trees
and a "Fixes:" tag is an admission we all screw up, right?
If you're willing to accept that we all make mistakes, you should also
accept that we're making mistakes everywhere: we write buggy code, we
fail at reviews, we forget tags, and we suck at backporting patches.
If we agree so far, then why do you assume that the same people who do
the above also perfectly tag their commits, and do perfect selection of
patches for stable? "I'm always right except when I'm wrong".
My view of the path forward with stable trees is that we have to
beef up our validation and testing story to be able to catch these
issues better, rather than place arbitrary limitations on parts of the
process. To me, your suggestions around the Fixes: tag sound like "Never
use kmalloc() because people often forget to free memory!" Will it
prevent memory leaks? Sure, but it'll also prevent useful patches from
coming in...
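(To make the analogy concrete, here's a contrived sketch; struct foo,
its field, and do_work() are all invented for illustration:

	#include <linux/slab.h>

	struct foo {
		int x;
	};

	static int do_work(void)
	{
		struct foo *f = kmalloc(sizeof(*f), GFP_KERNEL);

		if (!f)
			return -ENOMEM;
		f->x = 42;	/* ... use the allocation ... */
		kfree(f);	/* forget this line and you leak */
		return 0;
	}

The bug would be the missing kfree(), not the existence of kmalloc();
banning the allocator doesn't fix the caller.)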
Here's my suggestion: give us a test rig we can run our stable release
candidates through. Something that simulates the "real" load customers
are running. We promise that we won't release a stable kernel if your
tests are failing.
--
Thanks,
Sasha