Re: AUTOSEL process

On Mon, Feb 27, 2023 at 05:35:30PM -0500, Sasha Levin wrote:
> > > Note, however, that it's not enough to keep pointing at a tiny set and
> > > using it to suggest that the entire process is broken. How many AUTOSEL
> > > commits introduced a regression? How many -stable tagged ones did? How
> > > many bugs did AUTOSEL commits fix?
> > 
> > So basically you don't accept feedback from individual people, as individual
> > people don't have enough data?
> 
> I'd love to improve the process, but for that we need to figure out
> criteria for what we consider good or bad, collect data, and make
> decisions based on that data.
> 
> What I'm getting from this thread is a few anecdotal examples and
> statements that the process isn't working at all.
> 
> I took Jon's stablefixes script which he used for his previous articles
> around stable kernel regressions (here:
> https://lwn.net/Articles/812231/) and tried running it on the 5.15
> stable tree (just a random pick). I've proceeded with ignoring the
> non-user-visible regressions as Jon defined in his article (basically
> issues that were introduced and fixed in the same releases) and ended up
> with 604 commits that caused a user visible regression.
> 
> Out of those 604 commits:
> 
>  - 170 had an explicit stable tag.
>  - 434 did not have a stable tag.
> 
> Looking at the commits in the 5.15 tree:
> 
> With stable tag:
> 
> 	$ git log --oneline -i --grep "cc.*stable" v5.15..stable/linux-5.15.y | wc -l
> 	3676
> 
> Without stable tag (-96 commits which are version bumps):
> 
> 	$ git log --oneline --invert-grep -i --grep "cc.*stable" v5.15..stable/linux-5.15.y | wc -l
> 	10649
> 
> Regression rate for commits with stable tag: 170 / 3676 = 4.62%
> Regression rate for commits without a stable tag: 434 / 10553 = 4.11%
> 
> Is the analysis flawed somehow? Probably, and I'd happily take feedback on
> how/what I can do better, but this type of analysis is what I look for
> to know if the process is working well or not.

I'm shocked that these are the statistics you use to claim the current AUTOSEL
process is working.  I think they actually show quite the opposite!

First, since many AUTOSEL commits aren't actually fixes but nearly all
stable-tagged commits *are* fixes, the rate of regressions per commit would need
to be lower for AUTOSEL commits than for stable-tagged commits in order for
AUTOSEL commits to have the same rate of regressions *per fix*.  Your numbers
suggest a similar regression rate *per commit*.  Thus, AUTOSEL probably
introduces more regressions *per fix* than stable-tagged commits.
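
To put made-up numbers on it: if, say, only half of the AUTOSEL-selected
commits are actually fixes, then a per-commit regression rate of around 4%
would correspond to roughly 4% / 0.5 = 8% of fixes introducing a regression,
versus about 4.6% for the stable-tagged commits, which are nearly all fixes.
The 50% figure is made up; the point is only that "per commit" and "per fix"
can differ a lot.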

Second, the way you're identifying regression-introducing commits seems to
exclude one of the most common causes of AUTOSEL regressions, maybe *the* most
common one: missing prerequisite commits.  A very common case that I've seen
repeatedly is AUTOSEL picking just patch 2 or higher of a multi-patch series.
For an example, see the patch that started this thread...  If a missing
prerequisite is backported later, my understanding is that it usually isn't
given a Fixes tag, as the upstream commit didn't have it.  I think such
regressions aren't counted in your statistic, which only looks at Fixes tags.

(Of course, stable-tagged commits sometimes have missing prerequisite bugs too.
But it's expected to be at a lower rate, since the original developers and
maintainers are directly involved in adding the stable tags.  These are the
people who are more familiar than anyone else with prerequisites.)
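
To make the kind of check I have in mind concrete, here is a sketch of a
heuristic for the "later patch in a series" case.  It's only an illustration:
it assumes the candidate commit carries a Link: tag pointing at
lore.kernel.org (many, but not all, mainline commits do), and the function
name and details are mine, not anything that exists today.

        # Was this commit posted as patch N/M with N > 1?  Relies on the
        # commit's Link: tag (if any); the raw lore message still has the
        # "[PATCH n/m]" subject prefix that git strips on apply.
        is_later_patch_in_series()
        {
                local commit="$1" url subject n

                url=$(git log -1 --format=%B "$commit" |
                      sed -n 's|^Link: \(https://lore\.kernel\.org/[^ ]*\).*|\1|p' |
                      head -n1)
                [ -n "$url" ] || return 1

                subject=$(curl -s "${url%/}/raw" | grep -i -m1 '^Subject:')
                n=$(printf '%s\n' "$subject" |
                    sed -n 's|.*\[PATCH[^]]*[[:space:]]0*\([0-9]\{1,\}\)/[0-9].*|\1|p')
                [ -n "$n" ] && [ "$n" -gt 1 ]
        }

Whether that particular heuristic is the right one doesn't matter much; the
point is that "this was patch N of a series, and the earlier patches aren't
being backported" is mechanically checkable before the patch is even sent out
for review.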

Third, the category "commits without a stable tag" doesn't include just AUTOSEL
commits, but also non-AUTOSEL commits that people asked to be added to stable
because they fixed a problem for them.  Such commits often have been in mainline
for a long time, so naturally they're expected to have a lower regression rate
than stable-tagged commits due to the longer soak time, on average.  So if the
regression rate of stable-tagged and non-stable-tagged commits is actually
similar, that suggests the regression rate of non-stable-tagged commits is being
brought up artificially by a high regression rate in AUTOSEL commits...

So, I think your statistics actually reflect quite badly on AUTOSEL in its
current form.

By the way, to be clear, AUTOSEL is absolutely needed.  The way you are doing it
currently is not working well, though.  I think it needs to be tuned to select
fewer, higher-confidence fixes, and you need to do some basic checks against
each one, like "does this commit have a pending fix" and "is this commit part of
a multi-patch series, and if so are earlier patches needed as prerequisites".
There also needs to be more soak time in mainline, and more review time.
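
The "pending fix" check, at least, is largely mechanical.  A rough sketch,
where $commit stands for the mainline commit being considered and
origin/master stands for current mainline (both placeholders):

        # Has any later mainline commit declared that it fixes this one?
        # Fixes: tags use at least the first 12 characters of the SHA-1, so
        # matching on the 12-character prefix also catches longer forms.
        git log --oneline --grep="Fixes: $(git rev-parse --short=12 "$commit")" \
                "$commit"..origin/master

If that prints anything, the candidate should either be held back or be
backported together with its fix.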

IMO you also need to take a hard look at whatever neural network thing you are
using, as from what I've seen its results are quite poor...  It does pick up
some obvious fixes, but it seems they could have just as easily been found
through some heuristics with grep.  Beyond those obvious fixes, what it picks up
seems to be barely distinguishable from a random selection.
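
To be concrete about what I mean by "heuristics with grep" (a hypothetical
sketch, not a description of any existing tooling; the version range and the
keyword list are just placeholders):

        # Mainline commits in a release range whose message has a Fixes: tag
        # or an obviously bug-related keyword but no stable tag -- roughly the
        # low-hanging fruit a trivial heuristic would surface.
        git log --format='%H %s' v6.1..v6.2 |
        while read -r sha subject; do
                msg=$(git log -1 --format=%B "$sha")
                printf '%s\n' "$msg" | grep -qiE 'cc:.*stable@' && continue
                printf '%s\n' "$msg" |
                        grep -qiE '^Fixes:|use-after-free|memory leak|null pointer deref' &&
                        echo "$sha $subject"
        done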

- Eric


