Re: [MAINTAINERS/KERNEL SUMMIT] Trust and maintenance of file systems

On Sat, 2023-09-09 at 16:44 +0100, Matthew Wilcox wrote:
> On Sat, Sep 09, 2023 at 08:50:39AM -0400, James Bottomley wrote:
> > On Wed, 2023-09-06 at 00:23 +0100, Matthew Wilcox wrote:
> > > On Wed, Sep 06, 2023 at 09:06:21AM +1000, Dave Chinner wrote:
> > [...]
> > > > > E.g. the hfsplus driver is unmaintained despite collecting
> > > > > odd fixes. It collects odd fixes because it is really useful
> > > > > for interoperating with MacOS and it would be a pity to
> > > > > remove it.  At the same time it is impossible to test changes
> > > > > to hfsplus sanely as there is no mkfs.hfsplus or fsck.hfsplus
> > > > > available for Linux.  We used to have one that was ported
> > > > > from the open source Darwin code drops, and I managed to get
> > > > > xfstests to run on hfsplus with them, but this old version
> > > > > doesn't compile on any modern Linux distribution and new
> > > > > versions of the code aren't trivially portable to Linux.
> > > > > 
> > > > > Do we have volunteers with old enough distros that we can
> > > > > list as testers for this code?  Do we have any other way to
> > > > > proceed?
> > > > > 
> > > > > If we don't, are we just going to make untested API changes
> > > > > to these code bases, or keep the old APIs around forever?
> > > > 
> > > > We do slowly remove device drivers and platforms as the
> > > > hardware, developers and users disappear. We do also just
> > > > change driver APIs in device drivers for hardware that no-one
> > > > is actually able to test. The assumption is that if it gets
> > > > broken during API changes, someone who needs it to work will
> > > > fix it and send patches.
> > > > 
> > > > That seems to be the historical model for removing
> > > > unused/obsolete code from the kernel, so why should we treat
> > > > unmaintained/obsolete filesystems any differently?  i.e. Just
> > > > change the API, mark it CONFIG_BROKEN until someone comes along
> > > > and starts fixing it...
> > > 
> > > Umm.  If I change ->write_begin and ->write_end to take a folio,
> > > convert only the filesystems I can test via Luis' kdevops and
> > > mark the rest as CONFIG_BROKEN, I can guarantee you that Linus
> > > will reject that pull request.
> > 
> > I think really everyone in this debate needs to recognize two
> > things:
> > 
> >    1. There are older systems out there that have an active group
> >       of maintainers and which depend on some of these older
> >       filesystems.
> >    2. Data image archives will ipso facto be in older formats and
> >       preserving access to them is a historical necessity.
> 
> I don't understand why you think people don't recognise those things.

Well, people recognize them, yes, but as somebody else's problem; see
the virtualization point below.

> > So the problem of what to do with older, less well maintained
> > filesystems isn't one that can be solved by simply deleting them
> > and we have to figure out a way to move forward supporting them
> > (obviously for some value of the word "support"). 
> > 
> > By the way, people who think virtualization is the answer to this
> > should remember that virtual hardware is evolving just as fast as
> > physical hardware.
> 
> I think that's a red herring.  Of course there are advances in
> virtual hardware for those who need the best performance.  But
> there's also qemu's ability to provide you with a 1981-vintage PC (or
> more likely a 2000-era PC).  That's not going away.

So Red Hat dropping support for the pc machine type (alias i440fx)

https://bugzilla.redhat.com/show_bug.cgi?id=1946898

and the QEMU deprecation schedule

https://www.qemu.org/docs/master/about/deprecated.html

showing it as deprecated after 7.0 are both wrong?  That's not to say
virtualization can't help at all; it can certainly lengthen the time
horizon.  It's just not a panacea.
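
To make that concern concrete: as I understand it, the machine type at
issue is what you get today with an invocation roughly like the
following (old-disk.img is just a placeholder image, and which
versioned i440fx aliases are actually on offer depends on how your
QEMU was built):

    $ qemu-system-x86_64 -machine help | grep i440fx
    $ qemu-system-x86_64 -machine pc -m 512 \
          -drive file=old-disk.img,format=raw

Upstream "pc" is currently an alias for the newest pc-i440fx-* type,
so if that family really does go away, the second command simply stops
working and "just run it in a VM" stops being a free answer.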

> > > I really feel we're between a rock and a hard place with our
> > > unmaintained filesystems.  They have users who care passionately,
> > > but not the ability to maintain them.
> > 
> > So why is everybody making this a hard either/or? The volunteer
> > communities that grow around older things like filesystems are
> > going to be enthusiastic, but not really acquainted with the
> > technical intricacies of the modern VFS and mm. Requiring that they
> > cope with all the new stuff like iomap and folios is building an
> > unbridgeable chasm they're never going to cross. Give them an
> > easier way and they might get there.
> 
> Spoken like someone who has been paying no attention at all to what's
> going on in filesystems.

Well, that didn't take long;  one useful way to reduce stress on
everyone is actually to reduce the temperature of the discourse.

>   The newer APIs are easier to use.  The problem is understanding
> what the hell the old filesystems are doing with the old APIs.

OK, so we definitely have some filesystems that were experimental at
the time and pushed the boundaries, but not all (or even most) of the
older filesystems fall into this category.

> Nobody's interested.  That's the problem.  The number of filesystem
> developers we have is shrinking.  

What I actually heard was that there are communities of interested
users; they just don't get over the hump of becoming developers.  Fine,
I get that a significant number of users will never become developers,
but that doesn't relieve us of the responsibility to lower the barriers
for the small number who have the capacity.

> There hasn't been an HFS maintainer since 2011, and it wasn't a
> problem until syzbot decreed that every filesystem bug is a security
> bug.  And now, who'd want to be a fs maintainer with the automated
> harassment?

OK, so now we've strayed into the causes of maintainer burnout.  Syzbot
is undoubtedly a stressor, but one way of coping with a stressor is to
put it into perspective: syzbot is really a latter-day Coverity, and
everyone was much happier when developers ignored Coverity reports and
they went into a dedicated pile looked over by a team of people trying
to sort the serious issues from the wrong-but-not-exploitable ones.
I'd also have to say that anyone who allows older filesystems into
customer-facing infrastructure is signing themselves up for the risk
they're running, so I'd personally be happy if older fs teams simply
ignored all the syzbot reports.

> Burnout amongst fs maintainers is a real problem.  I have no idea how
> to solve it.

I already suggested we should share coping strategies:

https://lore.kernel.org/ksummit/ab9cfd857e32635f626a906410ad95877a22f0db.camel@xxxxxxxxxxxxxxxxxxxxx/

The sources of stress aren't really going to decrease, but how people
react to them could change.  Syzbot (and bugs in general) are a case in
point.  We used not to take untriaged bug reports seriously, but now
lots of people feel they can't ignore any fuzzer report.  We've tipped
too far into "everything's a crisis" mode and we really need to come
back and accept that not every bug is actually exploitable or even
important.  We should go back to requiring some idea of how important a
report is before immediately acting on it.  Perhaps we should also go
back to seeing if we can prise some resources out of the major
moneymakers in the cloud space.  After all, a bug that could cause a
cloud exploit might not even be exploitable on a personal laptop that
has no untrusted users.  So if we left it to the monied cloud farms to
figure out how to get us a triage of the report, and concentrated on
fixing, say, only the obvious personal laptop exploits, that might be a
way of pushing off some of the stressors.

James




