Re: [MAINTAINERS/KERNEL SUMMIT] Trust and maintenance of file systems

On 8/30/23 07:07, Christoph Hellwig wrote:
Hi all,

we have a lot of on-disk file system drivers in Linux, which I consider
a good thing as it allows a lot of interoperability.  At the same time
maintaining them is a burden, and there are a lot of expectations about
how they are maintained.

Part 1: untrusted file systems

There has been a lot of syzbot fuzzing using generated file system
images, which I again consider a very good thing as syzbot is good
at finding bugs.  Unfortunately it also finds a lot of bugs that no
one is interested in fixing.  The reason for that is that file system
maintainers consider only a tiny subset of the file system drivers,
and for some of them only a subset of the format options, to be
trusted on untrusted input.  It thus is a waste of time, not just for
syzbot itself but even more so for the maintainers, to report fuzzing
bugs in the other implementations.

What can we do to mark only certain file systems (and format options)
as trusted on untrusted input, to remove a lot of the current tension
and make everyone work more efficiently?  Note that this isn't even
getting into really trusted on-disk formats, which is a security
discussion of its own, but just into formats where the maintainers
are interested in dealing with fuzzed images.
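
One possible shape for such a marking, purely as a sketch (nothing like
this exists upstream today, and both the flag name and "foofs" below are
made up): a new fs_flags bit that a maintainer sets to declare that the
driver is expected to cope with attacker-controlled images, giving
syzbot, distributions and the mount path one place to check.

/*
 * Hypothetical sketch only -- FS_TRUSTS_UNTRUSTED_INPUT is not a real
 * flag.  A maintainer would set the bit in their file_system_type:
 *
 *	.fs_flags = FS_REQUIRES_DEV | FS_TRUSTS_UNTRUSTED_INPUT,
 *
 * Everything without the bit is fair game to refuse, taint on, or
 * simply skip when fuzzing.
 */
#define FS_TRUSTS_UNTRUSTED_INPUT	(1 << 8)	/* invented bit */

static bool fs_wants_fuzzed_images(const struct file_system_type *type)
{
	return type->fs_flags & FS_TRUSTS_UNTRUSTED_INPUT;
}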

Part 2: unmaintained file systems

A lot of our file system drivers are either de facto or formally
unmaintained.  If we want to move the kernel forward by finishing
API transitions (new mount API, buffer_head removal for the I/O path,
->writepage removal, etc.), these file systems need to change as well
and need some kind of testing.  The easiest way forward would be
to remove everything that is not fully maintained, but that would
remove a lot of useful features.
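
To make the scale of that concrete, here is roughly what the first of
those transitions (the new mount API) looks like for a simple block
device file system.  This is only a sketch against the existing
fs_context interfaces; all foofs_* names are placeholders, not an
actual driver:

#include <linux/fs.h>
#include <linux/fs_context.h>
#include <linux/fs_parser.h>
#include <linux/module.h>

enum { Opt_quiet };

static const struct fs_parameter_spec foofs_param_spec[] = {
	fsparam_flag("quiet", Opt_quiet),
	{}
};

static int foofs_parse_param(struct fs_context *fc,
			     struct fs_parameter *param)
{
	struct fs_parse_result result;
	int opt = fs_parse(fc, foofs_param_spec, param, &result);

	if (opt < 0)
		return opt;
	switch (opt) {
	case Opt_quiet:
		/* stash the option in fc->fs_private */
		break;
	}
	return 0;
}

static int foofs_fill_super(struct super_block *sb, struct fs_context *fc)
{
	/* read the on-disk superblock, set sb->s_op, set up the root inode */
	return -EINVAL;		/* placeholder */
}

static int foofs_get_tree(struct fs_context *fc)
{
	return get_tree_bdev(fc, foofs_fill_super);
}

static const struct fs_context_operations foofs_context_ops = {
	.parse_param	= foofs_parse_param,
	.get_tree	= foofs_get_tree,
};

static int foofs_init_fs_context(struct fs_context *fc)
{
	fc->ops = &foofs_context_ops;
	return 0;
}

static struct file_system_type foofs_fs_type = {
	.owner		= THIS_MODULE,
	.name		= "foofs",
	.init_fs_context = foofs_init_fs_context,
	.parameters	= foofs_param_spec,
	.kill_sb	= kill_block_super,
	.fs_flags	= FS_REQUIRES_DEV,
};

Every unconverted file system needs an equivalent of the above written
and, more importantly, tested before the legacy mount path can go away.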

E.g. the hfsplus driver is unmaintained despite collecting odd fixes.
It collects odd fixes because it is really useful for interoperating
with macOS and it would be a pity to remove it.  At the same time
it is impossible to test changes to hfsplus sanely as there is no
mkfs.hfsplus or fsck.hfsplus available for Linux.  We used to have
one that was ported from the open source Darwin code drops, and
I managed to get xfstests to run on hfsplus with them, but this
old version doesn't compile on any modern Linux distribution and
new versions of the code aren't trivially portable to Linux.

Do we have volunteers with old enough distros that we can list as
testers for this code?  Do we have any other way to proceed?

If we don't, are we just going to make untested API changes to these
code bases, or keep the old APIs around forever?


In this context, it might be worthwhile trying to determine if and when
to call a file system broken.

Case in point: After seeing this e-mail, I tried playing with a few file systems.
The most interesting exercise was with ntfs3.
Create the file system, mount it, copy a few files onto it, remove some of them, repeat.
A script doing that only takes a few seconds to corrupt the file system.
Trying to unmount it with the current upstream typically results in
a backtrace and/or crash.
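
For illustration, a minimal sketch of such a loop (not the exact
script used; it assumes the ntfs file system was created on /dev/vdb
with mkfs.ntfs beforehand, that /mnt exists, and that it runs as root):

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mount.h>
#include <unistd.h>

int main(void)
{
	char buf[4096], path[64];
	int i, j, fd;

	memset(buf, 0xa5, sizeof(buf));

	for (i = 0; i < 100; i++) {
		/* mount the previously created file system */
		if (mount("/dev/vdb", "/mnt", "ntfs3", 0, NULL)) {
			perror("mount");
			return 1;
		}
		/* copy a few files onto it */
		for (j = 0; j < 16; j++) {
			snprintf(path, sizeof(path), "/mnt/f-%d-%d", i, j);
			fd = open(path, O_CREAT | O_WRONLY | O_TRUNC, 0644);
			if (fd >= 0) {
				write(fd, buf, sizeof(buf));
				close(fd);
			}
		}
		/* remove some of them */
		for (j = 0; j < 16; j += 2) {
			snprintf(path, sizeof(path), "/mnt/f-%d-%d", i, j);
			unlink(path);
		}
		/* ... and repeat; the unmount is where the backtraces show up */
		if (umount("/mnt")) {
			perror("umount");
			return 1;
		}
	}
	return 0;
}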

Does that warrant marking it as BROKEN?  If not, what does?

Guenter



