Re: [MAINTAINERS/KERNEL SUMMIT] Trust and maintenance of file systems

[disclaimer: while I agree with many things Christoph, Dave, and Willy
said in this thread, I at the same time feel that someone needs to take
a stand for our "no regressions rule" here and act as its advocate. I
mean, Linus calls it our "#1 rule"; but sure, at the same time it's of
course of similar or higher importance that the kernel does not lose or
damage any data users entrusted to it, as the kernel otherwise might be
"a pointless piece of code that you might as well throw away"[1].]

On 07.09.23 05:26, Matthew Wilcox wrote:
> On Wed, Sep 06, 2023 at 10:51:39PM -0400, Steven Rostedt wrote:
>> I guess the point I'm making is, what's the burden in keeping it around in
>> the read-only state? It shouldn't require any updates for new features,
>> which is the complaint I believe Willy was having.
> 
> Old filesystems depend on old core functionality like bufferheads.
> 
> We want to remove bufferheads.
> 
> Who has the responsibility for updating those old filesystems to use
> iomap instead of bufferheads?
>
> Who has the responsibility for testing those filesystems still work
> after the update?
>
> Who has the responsibility for looking at a syzbot bug report that comes
> in twelve months after the conversion is done and deciding whether the
> conversion was the problem, or whether it's some other patch that
> happened before or after?

Isn't the answer to those questions the usual one: if you want to
change an in-kernel API, you have to switch all in-kernel users (or
mark them as broken and remove them later, if they apparently are not
used anymore in the wild), and deal with the fallout if a reliable
bisection later says that a regression was caused by a change of yours?
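
FWIW, to illustrate what "switch all in-kernel users" means in the
bufferhead case Willy mentions: below is a minimal sketch of roughly
how the read path of a simple block-mapped filesystem moves from
bufferheads to iomap. The foofs_* names, the existing foofs_get_block()
callback, and the foofs_disk_addr() helper are made up for
illustration; real conversions obviously involve more than this.

#include <linux/buffer_head.h>
#include <linux/fs.h>
#include <linux/iomap.h>

/* old: bufferhead-based ->read_folio, using the fs' get_block callback */
static int foofs_read_folio_bh(struct file *file, struct folio *folio)
{
        return block_read_full_folio(folio, foofs_get_block);
}

/* new: describe the extent covering [pos, pos + length) to iomap */
static int foofs_iomap_begin(struct inode *inode, loff_t pos,
                             loff_t length, unsigned flags,
                             struct iomap *iomap, struct iomap *srcmap)
{
        iomap->bdev = inode->i_sb->s_bdev;
        iomap->offset = pos;
        iomap->length = length;
        iomap->type = IOMAP_MAPPED;
        /* hypothetical helper returning the on-disk byte address */
        iomap->addr = foofs_disk_addr(inode, pos);
        return 0;
}

static const struct iomap_ops foofs_iomap_ops = {
        .iomap_begin = foofs_iomap_begin,
};

/* new: iomap-based ->read_folio, no bufferheads involved */
static int foofs_read_folio(struct file *file, struct folio *folio)
{
        return iomap_read_folio(folio, &foofs_iomap_ops);
}

The write path and the rest of the address_space_operations are of
course more work, but the shape is the same: the filesystem only
describes its extents in ->iomap_begin(), and the generic iomap code
takes over the page cache handling that bufferheads used to do.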

The only thing slightly special is the testing story, as for things
like drivers it is a whole lot simpler: developers there can get away
with little or no testing, as the risk of data loss or damage is
extremely small.

But well, changes to arch/ or mm/ code can lead to data damage or loss
on rare or unsupported environments as well. All those CI systems out
there that test the kernel in various environments help to catch quite a
few of those problems before regular users run into them.

So why can't that work similarly for unmaintained file systems? We
could even establish the rule that Linus only applies patches to
certain parts of the kernel if the test suite for unmaintained file
systems passed without regressions. And only accept new file system
code if a test suite exists that is easy to integrate into CI systems
(e.g. something smaller and faster than what the ext4 and xfs
developers run regularly; smaller and faster should likely be good
enough here).

Ciao, Thorsten

[1] that's something Linus once said in the context of a regression, but
I think it fits here
