> So why can't that work similarly for unmaintained file systems? We could
> even establish the rule that Linus should only apply patches to some
> parts of the kernel if the test suite for unmaintained file systems
> succeeded without regressions. And only accept new file system code if a

Reading this mail scared me. The list of reiserfs bugs alone is crazy. And syzbot keeps piling them on. It can't even pass an xfstests run without splatting all over the place, last I checked. And there's no maintainer for it. We'll pick up patches if we get sent them, but none of the vfs maintainers and reviewers has the bandwidth to take care of rotting filesystems and their various ailments.

Yes, we should have a discussion about the circumstances under which we can remove a filesystem. I think that's absolutely what we should do, and we should nudge userspace to stop compiling known orphaned filesystems. If most distros have stopped compiling support for a filesystem, then I think that's a good indication that we can at least start to talk about how to remove it. And we should probably tell distros more aggressively that a filesystem is orphaned and unmaintained.

But even if we decide, or it is decided for us, that we have to keep such old filesystems in tree forever, then the contract with userspace must be that such filesystems are zombies. They should however not become an even bigger burden on, or obstacle to improving, actively maintained filesystems or the vfs than they already are.

I think it's also worth clarifying something: right now, everyone who does fs-wide changes does their absolute best to account for every filesystem that's in the tree. And for people not familiar with, or even refusing to care about, any other filesystems, the maintainers and reviewers will remind them about the consequences for other filesystems as far as they have that knowledge. That's already a major task.

For every single fs/-wide change we try to make absolutely sure that if it regresses anything - even the deadest-of-dead filesystems - it will be fixed as soon as we get a report. That's what we did for the superblock rework this cycle, the posix acl rework over the last cycles, the timestamp patches, the freezing patches.

But it is very scary to think that we might be put even more under the yoke of dead filesystems. They put enough of a burden on us already: it's not just the filesystems themselves we have to keep around, but quite often legacy infrastructure and hacks in various places as well. The burden of unmaintained filesystems is very, very real. fs/-wide changes are very costly in development time.

> test suite that is easy to integrate in CI systems exists (e.g.
> something smaller and faster than what the ext4 and xfs developers run
> regularly, but smaller and faster should likely be good enough here).

The big question of course is who is going to do that? We have a large number of filesystems, and only a subset of them is integrated with, or even integratable with, xfstests. And xfstests is the standard for fs testing. So either a filesystem is integrated with xfstests and we can test it, or it isn't and we can't. And if a legacy filesystem does become integrated, then someone needs to do the work to determine the baseline of tests that need to pass and then fix all the bugs to get to a clean baseline run. I would expect that to be a full-time job for quite a while. Imho, mounting an unmaintained filesystem that isn't integrated with xfstests is a gamble with your data.
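Just to make the "baseline" idea concrete, such a run with xfstests would look roughly like the sketch below. The devices, mount points and the baseline-exclude file name are placeholders I'm making up for illustration; working out what actually belongs in that exclude list for a legacy filesystem is exactly the full-time job I mean:

  # local.config for xfstests; devices and mount points are placeholders
  export FSTYP=reiserfs
  export TEST_DEV=/dev/vdb
  export TEST_DIR=/mnt/test
  export SCRATCH_DEV=/dev/vdc
  export SCRATCH_MNT=/mnt/scratch

  # Run the quick group, skipping the known failures recorded in a
  # per-filesystem baseline file (one test id per line, e.g. generic/475).
  ./check -g quick -E baseline-exclude.reiserfs

A CI system could then treat any test that isn't in the baseline file and fails as a regression.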
(And what I would really rather see happen before that is that we get stuff like vfs.git auto-integrated with xfstests runs/CI at some point.)