Re: [MAINTAINERS/KERNEL SUMMIT] Trust and maintenance of file systems

On Thu, Sep 07, 2023 at 01:18:27PM +0200, Thorsten Leemhuis wrote:
> On 07.09.23 12:29, Christian Brauner wrote:
> >> So why can't that work similarly for unmaintained file systems? We could
> >> even establish the rule that Linus should only apply patches to some
> >> parts of the kernel if the test suite for unmaintained file systems
> >> succeeded without regressions. And only accept new file system code if a
> > 
> > Reading this mail scared me.
> 
> Sorry about that, I can fully understand that. It's just that some
> statements in this thread sounded a whole lot like "filesystems want to
> opt-out of the no regression rule" to me. That's why I at some point
> thought I had to speak up.

It's the very opposite of that.  We're all highly conscious of not eating
user data.  Which means that filesystem development often grinds to a
halt while we investigate bugs.  This is why syzbot is so freaking
dangerous.  It's essentially an automated assault on fs developers.
Worse, Google released syzkaller to the public and now we have random
arseholes running it who have "made proprietary changes to it", and have
no idea how to decide whether a report from it is in any way useful.

> But what about hfsplus? From hch's initial mail of this thread it sounds
> like that is something users would miss. So removing it without a very
> strong need[1] seems wrong to me. That's why I got involved in this
> discussion.
> 
> [1] e.g. data loss or damage (as mentioned in my earlier mail) or
> substantial security problems (forgot to mention them in my earlier mail)

That's the entire problem!  A seemingly innocent change can easily
lose HFS+ data and we wouldn't find out for years because there's no
test-suite.  A properly tested filesystem looks like this:

https://lore.kernel.org/linux-ext4/20230903120001.qjv5uva2zaqthgk2@zlang-mailbox/

I inadvertently introduced a bug in ext4 with 1kB block size; it was
picked up in less than a week, and within a week of the initial report
it was diagnosed and fixed.

If that same bug had been introduced to HFS+, how long would it have
taken for anyone to find the bug?  How much longer would it have taken
to track down and fix?
