Re: Unprivileged filesystem mounts

On Tue, Mar 11, 2025 at 04:57:54PM +1100, Dave Chinner wrote:
> On Mon, Mar 10, 2025 at 10:19:57PM -0400, Demi Marie Obenour wrote:
> > People have stuff to get done.  If you disallow unprivileged filesystem
> > mounts, they will just use sudo (or equivalent) instead.
> 
> I am not advocating that we disallow mounting of untrusted devices.
> 
> > The problem is
> > not that users are mounting untrusted filesystems.  The problem is that
> > mounting untrusted filesystems is unsafe.
> 
> > Making untrusted filesystems safe to mount is the only solution that
> > lets users do what they actually need to do. That means either actually
> > fixing the filesystem code,
> 
> Yes, and the point I keep making is that we cannot provide that
> guarantee from the kernel for existing filesystems. We cannot detect
> all possible malicious tampering situations without cryptographically
> secure verification, and we can't generate full trust from nothing.

Why is it not possible to provide that guarantee?  I'm not concerned
about infinite loops or deadlocks.  Is there a reason it is not possible
to prevent memory corruption?

> The typical desktop policy of "probe and automount any device that
> is plugged in" prevents the user from examining the device to
> determine if it contains what it is supposed to contain.  The user
> is not given any opportunity to decide if trust is warranted before
> the kernel filesystem parser running in ring 0 is exposed to the
> malicious image.
> 
> That's the fundamental policy problem we need to address: the user
> and/or admin is not in control of their own security because
> application developers and/or distro maintainers have decided they
> should not have a choice.
> 
> In this situation, the choice of what to do *must* fall to the user,
> but the argument for "filesystem corruption is a CVE-worthy bug" is
> that the choice has been taken away from the user. That's what I'm
> saying needs to change - the choice needs to be returned to the
> user...

I am 100% in favor of not automounting filesystems without user
interaction, but that only means that an exploit will require user
interaction.  Users need to get things done, and if their task requires
them to mount a not-fully-trusted filesystem image, then that is what they
will do, and they will typically do it in the most obvious way possible.
That most obvious way needs to be a safe way, and it needs to have good
enough performance that users don't go around looking for an unsafe way.

> > or running it in a sufficiently tight
> > sandbox that vulnerabilities in it are of too low importance to matter.
> > libguestfs+FUSE is the most obvious way to do this, but the performance
> > might not be enough for distros to turn it on.
> 
> Yes, I have advocated for that to be used for desktop mounts in the
> past. Similarly, I have also advocated for liblinux + FUSE to be
> used so that the kernel filesystem code is used but run from a
> userspace context where the kernel cannot be compromised.
> 
> I have also advocated for user removable devices to be encrypted by
> default. The act of the user unlocking the device automatically
> marks it as trusted because undetectable malicious tampering is
> highly unlikely.

That is definitely a good idea.
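For what it's worth, the unlock step is cheap for a desktop helper to
wire up.  A minimal sketch with libcryptsetup follows (it assumes a
LUKS2-formatted removable device and a passphrase the desktop has
already collected; the device path, mapping name and function name are
placeholders, not anyone's existing code):

/*
 * Sketch: unlock a LUKS2 removable device so that the user's explicit
 * passphrase entry doubles as the trust decision for the filesystem
 * inside.  Link with -lcryptsetup.
 */
#include <libcryptsetup.h>
#include <string.h>

int unlock_removable(const char *dev, const char *name, const char *pass)
{
	struct crypt_device *cd = NULL;
	int r;

	r = crypt_init(&cd, dev);		/* open the block device */
	if (r < 0)
		return r;

	r = crypt_load(cd, CRYPT_LUKS2, NULL);	/* read the LUKS2 header */
	if (r < 0)
		goto out;

	/*
	 * Map the plaintext device at /dev/mapper/<name>.  Only after
	 * this explicit, user-driven step would the helper go on to
	 * mount the filesystem inside.
	 */
	r = crypt_activate_by_passphrase(cd, name, CRYPT_ANY_SLOT,
					 pass, strlen(pass), 0);
out:
	crypt_free(cd);
	return r;
}

The explicit passphrase prompt is the trust decision, so no extra UI is
needed on top of what encrypted removable media already require.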

> I have also advocated for a device registry that records removable
> device signatures and whether the user trusted them or not so that
> they only need to be prompted once for any given removable device
> they use.
> 
> There are *many* potential user-friendly solutions to the problem,
> but they -all- lie in the domain of userspace applications and/or
> policies. This is *not* a problem more or better code in the kernel
> can solve.
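And the registry idea in particular needs nothing from the kernel.  In a
mount helper it could be as small as the sketch below (the registry path
and the choice of identifier, e.g. the filesystem UUID the helper
already probes, are arbitrary for the sake of the example):

/*
 * Sketch: a userspace trust registry for removable devices, keyed by an
 * identifier the mount helper already knows (e.g. the filesystem UUID).
 */
#include <stdio.h>
#include <string.h>

#define REGISTRY "/var/lib/mount-helper/trusted-devices"

/* Returns 1 if the device was previously marked trusted, 0 otherwise. */
int device_is_trusted(const char *id)
{
	char line[256];
	FILE *f = fopen(REGISTRY, "r");
	int found = 0;

	if (!f)
		return 0;
	while (fgets(line, sizeof(line), f)) {
		line[strcspn(line, "\n")] = '\0';
		if (strcmp(line, id) == 0) {
			found = 1;
			break;
		}
	}
	fclose(f);
	return found;
}

/* Record the decision after the user explicitly approves the device once. */
int remember_trusted_device(const char *id)
{
	FILE *f = fopen(REGISTRY, "a");

	if (!f)
		return -1;
	fprintf(f, "%s\n", id);
	return fclose(f);
}

Where I disagree is with the last claim.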

It is certainly possible to make a memory-safe implementation of any
filesystem.  If the current implementation can't prevent memory
corruption when a malicious filesystem is mounted, that is a property of
the implementation, not of the problem.

> Kees and Co keep telling us we should be making changes that make it
> harder (or completely prevent) entire classes of vulnerabilities
> from being exploited. Yet every time we suggest that a more secure
> policy should be applied to automounting filesystems to prevent
> system compromise on device hotplug, nobody seems to be willing to
> put security first.

Not automounting filesystems on hotplug is a _part_ of the solution.
It cannot be the _entire_ solution.  Users sometimes need to be able to
interact with untrusted filesystem images at reasonable speed.
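For the record, the sandboxed path is not much code either.  Roughly,
with the libguestfs C API (the image path, in-guest device name and
mountpoint below are placeholders, and error handling is minimal):

/*
 * Sketch: expose an untrusted filesystem image through libguestfs +
 * FUSE, so the on-disk format is parsed inside the libguestfs appliance
 * rather than in the host kernel.  Link with -lguestfs.
 */
#include <guestfs.h>
#include <stdlib.h>

int main(void)
{
	guestfs_h *g = guestfs_create();

	if (!g)
		exit(EXIT_FAILURE);

	/* Attach the untrusted image read-only and boot the appliance. */
	if (guestfs_add_drive_ro(g, "/path/to/untrusted.img") == -1 ||
	    guestfs_launch(g) == -1)
		exit(EXIT_FAILURE);

	/* Mount the first partition inside the appliance... */
	if (guestfs_mount_ro(g, "/dev/sda1", "/") == -1)
		exit(EXIT_FAILURE);

	/*
	 * ...and re-export it to the host through FUSE.  A parser bug
	 * now compromises the appliance, not the host kernel.
	 */
	if (guestfs_mount_local(g, "/mnt/untrusted", -1) == -1)
		exit(EXIT_FAILURE);
	guestfs_mount_local_run(g);	/* serve requests until unmounted */

	guestfs_shutdown(g);
	guestfs_close(g);
	return 0;
}

Whether the appliance start-up and FUSE round trips are fast enough for
distros to make this the default path is exactly the open question.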

> > For ext4 and F2FS, if there is a vulnerability that can be exploited by
> > a malicious filesystem image, it is a verified boot bypass for Chrome OS
> > and Android, respectively. Verified boot is a security boundary for
> > both of them,
> 
> How does one maliciously corrupt the root filesystem on an Android
> phone? How many security boundaries have to be violated before
> an attacker can directly modify the physical storage underlying the
> read-only system partition?
> 
> Again, if the attacker has device modification capability, why
> would they bother trying to perform a complex filesystem
> corruption attack during boot when they can simply modify what
> runs on startup?
> 
> And if this is a real attack vector that Android must defend against,
> why isn't that device and filesystem image cryptographically signed
> and verified at boot time to prevent such attacks? That will prevent
> the entire class of malicious tampering exploits completely without
> having to care about undiscovered filesystem bugs - that's a much
> more robust solution from a verified boot and system security
> perspective...

On both Android and ChromeOS, the root filesystem is a dm-verity volume,
and the Merkle tree root hash is either signed or is part of the signed
kernel image.  The signed kernel image is itself verified by the
bootloader.  Therefore, the root filesystem cannot be tampered with.
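For completeness, the activation step itself is tiny; the work is all in
where the root hash comes from.  A rough sketch with libcryptsetup (the
device paths, target name, and how the signed root hash reaches this
function are placeholders):

/*
 * Sketch: activate a dm-verity target, roughly the way veritysetup
 * does it.  The root hash is the trust anchor: it arrives from the
 * signed kernel image or verified boot metadata, never from the
 * attacker-controllable block device.  Link with -lcryptsetup.
 */
#include <libcryptsetup.h>
#include <stddef.h>
#include <stdint.h>

int activate_verified_root(const char *data_dev, const char *hash_dev,
			   const uint8_t *root_hash, size_t hash_size)
{
	struct crypt_device *cd = NULL;
	struct crypt_params_verity params = { 0 };
	int r;

	/* The verity superblock lives on the hash device. */
	r = crypt_init_data_device(&cd, hash_dev, data_dev);
	if (r < 0)
		return r;

	r = crypt_load(cd, CRYPT_VERITY, &params);
	if (r < 0)
		goto out;

	/*
	 * Every block subsequently read from /dev/mapper/vroot is
	 * checked against the Merkle tree before it reaches the
	 * filesystem code.
	 */
	r = crypt_activate_by_volume_key(cd, "vroot",
					 (const char *)root_hash, hash_size,
					 CRYPT_ACTIVATE_READONLY);
out:
	crypt_free(cd);
	return r;
}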

However, the root filesystem is not the only filesystem image that must
be mounted.  There is also a writable data volume, and that _cannot_ be
signed because it contains user data.  It is encrypted, but part of the
threat model for both Android and ChromeOS is an attacker who has gained
root or even kernel code execution and wants to retain their access
across device reboots.  They can't tamper with the kernel or root
filesystem, and privileged userspace treats the data on the writable
filesystem as untrusted.  However, the attacker can replace the writable
filesystem image with anything they want, so if they can craft an
image that gains kernel code execution the next time the system boots,
they have successfully obtained persistence.

Also, at least Google Pixels support updating the OS via the bootloader.
The bootloader checks that the image was signed by the OS vendor
(generally, but not always, Google), and I believe it also checks for
downgrade attacks.  However, this means of updating the OS does not
wipe user data, so if an attacker has gained code execution with root or
even kernel privileges, updating the OS to a version that patches the
vulnerability they used will revoke their access, unless they can regain
it through a bug triggered by the preserved, attacker-controlled data
filesystem.  The same is true if the attacker used USB for their exploit
and the reboot happens after the user has unplugged the USB device.

Furthermore, on UEFI systems the EFI System Partition cannot be
cryptographically protected as the firmware does not support this.

> > so just forward syzbot reports to their respective
> > security teams and let them do the jobs they are paid to do.
> 
> Security teams don't fix "syzbot bugs"; they are typically the
> people that run syzbot instances. It's the developers who then
> have to triage and fix the issues that are found, so that's who the
> bug reports should go to (and do). And just because syzbot finds an
> issue, that doesn't make it a security issue - all it is is another
> bug found by another automated test suite that needs fixing.

Browser vendors consider many kinds of memory unsafety problems to be
exploitable until and unless proven otherwise.  My understanding is that
experience has proven them to be correct in this regard.
-- 
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab
