Re: [PATCH] fs: prevent data-race due to missing inode_lock when calling vfs_getattr

On Mon, Nov 18, 2024 at 03:00:39PM +0900, Jeongjun Park wrote:
> 
> Hello,
> 
> > Al Viro <viro@xxxxxxxxxxxxxxxxxx> wrote:
> > 
> > On Mon, Nov 18, 2024 at 01:37:19AM +0900, Jeongjun Park wrote:
> >> Many filesystems lock the inode before calling vfs_getattr, so there is
> >> no data race on the inode. However, some functions in fs/stat.c that
> >> call vfs_getattr do not take the inode lock, so a data race occurs there.
> >> 
> >> Therefore, we need a patch that removes this long-standing data race on
> >> the inode from those functions that do not take the lock.
> > 
> > Why do we care?  Slapping even a shared lock on a _very_ hot path, with
> > possible considerable latency, would need more than "theoretically it's
> > a data race".
> 
> All the functions to which this patch adds the lock are called only via
> syscalls, so in most cases there will be no noticeable performance impact.

Pardon me, but I am unable to follow your reasoning.

> And
> this data race is not a problem that occurs only in theory. It is
> a bug that syzbot has been reporting for years. Many file systems in
> the kernel take inode_lock before calling vfs_getattr, so the data
> race does not occur there, but fs/stat.c alone has had this data race
> for years. This alone shows that adding inode_lock to these
> functions is a good way to solve the problem without much
> performance degradation.

Explain.  First of all, these are, by far, the most frequent callers
of vfs_getattr(); what "many filesystems" are doing around their calls
of the same is irrelevant.  Which filesystems, BTW?  And which call
chains are you talking about?  Most of the filesystems never call it
at all.

Furthermore, on a lot of userland loads stat(2) is a very hot path -
it is called a lot.  And the rwsem in question has plenty of takers -
both shared and exclusive.  The effect of piling a lot of threads
that grab it shared on top of the existing mix is not something
I am ready to predict without experiments - not beyond "likely to be
unpleasant, possibly very much so".
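
For the record, the shape of the change being argued about is, as far as
I understand it, something like the sketch below (a hypothetical wrapper
with a made-up name, not the actual patch) - i.e. every stat(2)-family
call becomes yet another shared taker of i_rwsem:

#include <linux/fs.h>
#include <linux/stat.h>

/* Hypothetical sketch of the kind of change under discussion, not the patch. */
static int sketch_getattr_locked(const struct path *path, struct kstat *stat,
				 u32 request_mask, unsigned int query_flags)
{
	struct inode *inode = d_inode(path->dentry);
	int error;

	/* new shared i_rwsem acquisition on a very hot path */
	inode_lock_shared(inode);
	error = vfs_getattr(path, stat, request_mask, query_flags);
	inode_unlock_shared(inode);

	return error;
}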

Finally, you have not offered any explanation of why that data race
matters - and "syzbot reporting" is not one.  It is possible that
actual observable bugs exist, but it would be useful to have at least
one of those described in detail.

Please, spell your reasoning out.  Note that a fetch overlapping with
a store is *NOT* a bug in itself.  It may become one if you observe
an object in an inconsistent state - e.g. on a 32-bit architecture,
reading a 64-bit value in parallel with an assignment to the same may
end up with a torn result.  And yes, we do have just such a value
read there - the inode size.  Which is why i_size_read() is used there,
with matching i_size_write() in the writers.
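
For reference, the 32-bit SMP flavour of those helpers is essentially a
seqcount retry loop; the sketch below is simplified and paraphrased
rather than verbatim kernel code, but it shows why a store to i_size
racing with a read does not produce a torn value:

/* Simplified sketch; assumes the 32-bit SMP configuration in which
 * inode->i_size_seqcount exists. */
static inline loff_t sketch_i_size_read(const struct inode *inode)
{
	loff_t size;
	unsigned int seq;

	do {
		seq  = read_seqcount_begin(&inode->i_size_seqcount);
		size = inode->i_size;	/* two 32-bit loads on a 32-bit box */
	} while (read_seqcount_retry(&inode->i_size_seqcount, seq));

	return size;
}

static inline void sketch_i_size_write(struct inode *inode, loff_t size)
{
	preempt_disable();
	write_seqcount_begin(&inode->i_size_seqcount);
	inode->i_size = size;		/* torn store is hidden behind the seqcount */
	write_seqcount_end(&inode->i_size_seqcount);
	preempt_enable();
}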

Details matter; what is and what is not an inconsistent state
really does depend upon the object you are talking about.
There's no way in hell for syzbot to be able to determine that.



