Re: [PATCH v6 3/8] rust: file: add Rust abstraction for `struct file`

> > So what you're getting at seems to be that some process has a private
> > file descriptor table and has just opened @fd to a @file that isn't
> > shared.
> > 
> > 	fd = open("/my/file");
> > 
> > and then, let's say, runs some random ioctl(fd, SOMETHING_SOMETHING)
> > that somehow does:
> > 
> > 	struct fd f = fdget_pos(fd);
> > 	if (!f.file)
> > 		return -EBADF;
> > 
> > We know that process has used a light reference count and that it didn't
> > acquire f_pos_lock.
> > 
> > Your whole approach seems to assume that after something like this has
> > happened, the same process now offloads that struct file to another
> > process that somehow ends up doing some other operation on the file
> > that would also require f_pos_lock to be taken but doesn't take it,
> > like a read or something.
> 
> Can we not have a data race even if the other process *does* take the
> f_pos_lock mutex? The current thread did not take the mutex, so if the
> current thread touches the file position after sending the file
> reference, then that could race with the other process even if the
> other process takes f_pos_lock.

Yes. For that, the original sender would have to have taken a light
reference, the file must not have been in any other file descriptor
table, and the original opener must have sent it while in fdget_pos
scope and remained in that scope while the receiver installs the fd
into their fd table and then leaves and re-enters the kernel to, e.g.,
read.
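
To make that window concrete, here is a rough sketch of the sender
side (illustrative only; send_file_to_receiver() is a made-up stand-in
for a binder-style transfer, not a real kernel API):

	static long racy_ioctl(int fd)
	{
		/*
		 * Private fd table, unshared file: fdget_pos() takes a
		 * light reference and does not acquire f_pos_lock.
		 */
		struct fd f = fdget_pos(fd);

		if (!f.file)
			return -EBADF;

		/*
		 * Still inside the fdget_pos scope: hand the file to the
		 * receiver. If the receiver installs it, returns to
		 * userspace and then read()s, its read path takes
		 * f_pos_lock, but this task never did, so both may touch
		 * f_pos concurrently.
		 */
		send_file_to_receiver(f.file);
		f.file->f_pos += 1;	/* unsynchronized f_pos access */

		fdput_pos(f);
		return 0;
	}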

> > To share a file between multiple processes would normally always
> > require that the process sends that file to another process. That
> > process then installs that fd into its file descriptor table and then
> > later accesses that file via fdget() again. That's the standard way of
> > doing it - binder does it that way too. And that's all perfectly fine.
> 
> And similarly, if the remote process installs the file, returns to
> userspace, and then userspace calls back into the kernel, which enters
> an fdget_pos scope and modifies the file position, then this can also
> race on the file position if the original process changes the file
> position after sending the file reference.

If that process were to send it from within an fdget_pos scope, then yes.
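
For reference, the standard receive-and-install path looks roughly
like this (a sketch with error handling trimmed; it assumes the caller
already holds a reference to the file, e.g. one taken during an
SCM_RIGHTS or binder transfer):

	int install_received_file(struct file *file)
	{
		int fd = get_unused_fd_flags(O_CLOEXEC);

		if (fd < 0) {
			fput(file);	/* drop the transferred reference */
			return fd;
		}

		/*
		 * Consumes the caller's reference. From here on userspace
		 * reaches the file through its own descriptor table, so a
		 * later fdget_pos() in e.g. ksys_read() takes f_pos_lock
		 * when needed, as usual.
		 */
		fd_install(fd, file);
		return fd;
	}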

> *That's* the data race that this is trying to prevent.
> 
> > What you would need for this to be a problem is for a process to be
> > sent a struct file from a process that is in the middle of an
> > f_pos_lock scope and for the receiving process to immediately start
> > doing stuff that would normally require f_pos_lock.
> > 
> > Like, idk, vfs_read(file, ...).
> > 
> > If that's what this is about then yes, there's a close-to-zero but
> > non-zero possibility that some really stupid code could end up doing
> > something like this.
> > 
> > Basically, that scenario doesn't currently exist (as I've mentioned
> > before). It only exists in a single instance, and that's when
> > pidfd_getfd() is used to steal a file from another task while that
> > task is in the middle of an f_pos_lock section. (As I said before, we
> > don't care about that because it's a non-POSIX interface anyway and we
> > have ptrace rights anyway. And IIUC that wouldn't even be preventable
> > in your current scheme, because you would need to have the information
> > available that the struct file you're about to steal from the file
> > descriptor table is currently within an f_pos_lock section.)
> > 
> > Is it honestly worth encoding all that complexity into Rust's file
> > implementation itself right now? It's barely understandable to
> > non-Rust experts as it is. :)
> > 
> > Imho, it would seem a lot more prudent to just have something simpler
> > for now.
> 
> The purpose of the changes I've made is to prevent data races on the
> file position. If we go back to what we had before, then the API does
> not make it impossible for users of the API to cause such data races.
> 
> That is the tradeoff.

Right. Sorry, there's some back and forth here. But we're all
navigating new territory, and it's not always trivial to see what the
correct approach is.

I wonder what's better for now. It seems that the binder code isn't
really subject to the races we discussed. So maybe we should start with
the simpler approach for now so we don't get bogged down in encoding
all the subtle details into Rust's file wrapper just yet?



