On 2024/8/1 12:24, Matthew Wilcox wrote:
> On Thu, Aug 01, 2024 at 09:52:49AM +0800, Zhang Yi wrote:
>> On 2024/8/1 0:52, Matthew Wilcox wrote:
>>> On Wed, Jul 31, 2024 at 05:13:04PM +0800, Zhang Yi wrote:
>>>> Commit '1cea335d1db1 ("iomap: fix sub-page uptodate handling")' fixed a
>>>> race when submitting multiple read bios for a page that spans more than
>>>> one file system block, by adding a spinlock (now named state_lock) to
>>>> serialize updates to the page's uptodate state. However, the race only
>>>> exists between the read I/O submitting and completing threads; the page
>>>> lock is sufficient to protect the other paths, e.g. the buffered write
>>>> path. Now that large folios are supported, the spinlock hurts buffered
>>>> write performance more, so dropping it reduces some unnecessary locking
>>>> overhead.
>>>
>>> This patch doesn't work. If we get two read completions at the same
>>> time for blocks belonging to the same folio, they will both write to
>>> the uptodate array at the same time.
>>>
>> This patch just drops the state_lock in the buffered write path; it
>> doesn't affect the read path. Setting uptodate in the read completion
>> path is still protected by the state_lock, please see
>> iomap_finish_folio_read(). So I think this patch doesn't affect the
>> case you mentioned, or am I missing something?
>
> Oh, I see. So the argument for locking correctness is that:
>
> A. If ifs_set_range_uptodate() is called from iomap_finish_folio_read(),
>    the state_lock is held.
> B. If ifs_set_range_uptodate() is called from iomap_set_range_uptodate(),
>    we know that either:
>    B1. The caller of iomap_set_range_uptodate() holds the folio lock, and
>        this is the only place that can call ifs_set_range_uptodate() for
>        this folio, or
>    B2. The caller of iomap_set_range_uptodate() holds the state lock.
>
> But I think you've assigned iomap_read_inline_data() to case B1 when I
> think it's B2. erofs can certainly have a file which consists of various
> blocks elsewhere in the file and then a tail that is stored inline.

Oh, you are right, thanks for pointing this out. I missed the case of a
folio containing both file blocks and inline data on erofs. So we also
need to hold the state_lock in iomap_read_inline_data(); it looks like
we'd better introduce a new common helper to do this job for case B2,
as sketched below.
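(Untested, and the helper name is just a placeholder; I'm also assuming
ifs_set_range_uptodate() takes the ifs and returns whether the whole
folio has become uptodate.) Something like:

	/*
	 * For case B2 callers the folio lock does not exclude concurrent
	 * read completions, so update the uptodate bitmap under the
	 * state_lock, mirroring what iomap_finish_folio_read() does.
	 */
	static void iomap_set_range_uptodate_locked(struct folio *folio,
			size_t off, size_t len)
	{
		struct iomap_folio_state *ifs = folio->private;
		unsigned long flags;
		bool uptodate = true;

		if (ifs) {
			spin_lock_irqsave(&ifs->state_lock, flags);
			uptodate = ifs_set_range_uptodate(folio, ifs, off, len);
			spin_unlock_irqrestore(&ifs->state_lock, flags);
		}

		if (uptodate)
			folio_mark_uptodate(folio);
	}

Then iomap_read_inline_data() (and any other B2 caller) would use this
helper instead of the unlocked update.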
> __iomap_write_begin() is case B1 because it holds the folio lock, and
> submits its read(s) synchronously. Likewise __iomap_write_end() is
> case B1.
>
> But, um. Why do we need to call iomap_set_range_uptodate() in both
> write_begin() and write_end()?
>
> And I think this is actively buggy:
>
>		if (iomap_block_needs_zeroing(iter, block_start)) {
>			if (WARN_ON_ONCE(iter->flags & IOMAP_UNSHARE))
>				return -EIO;
>			folio_zero_segments(folio, poff, from, to, poff + plen);
> ...
>		iomap_set_range_uptodate(folio, poff, plen);
>
> because we zero from 'poff' to 'from', then from 'to' to 'poff+plen',
> but mark the entire range as uptodate. And once a range is marked
> as uptodate, it can be read from.
>
> So we can do this:
>
>  - Get a write request for bytes 1-4094 over a hole
>  - allocate single page folio
>  - zero bytes 0 and 4095
>  - mark 0-4095 as uptodate
>  - take page fault while trying to access the user address
>  - read() to bytes 0-4095 now succeeds even though we haven't written
>    1-4094 yet
>
> And that page fault can be uffd or a buffer that's in an mmap that's
> out on disc. Plenty of time to make this race happen, and we leak
> 4094/4096 bytes of the previous contents of that folio to userspace.
>
> Or did I miss something?
>
Indeed, this could happen on a filesystem that doesn't take the inode
lock in the buffered read path (I've verified it on my ext4 buffered
iomap branch), and I guess it could also happen after a short copy in
the write path. We don't need iomap_set_range_uptodate() for the
zeroing case in __iomap_write_begin(); see the sketch below.
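Roughly like this (untested; the else branch that reads the block from
disk is elided, and I'm assuming __iomap_write_end() already marks the
written range uptodate once the copy has succeeded, as you note above):

		if (iomap_block_needs_zeroing(iter, block_start)) {
			if (WARN_ON_ONCE(iter->flags & IOMAP_UNSHARE))
				return -EIO;
			/*
			 * Zero the edges but do not mark them uptodate
			 * here; let __iomap_write_end() do that after the
			 * copy, so a page fault during the copy cannot
			 * read the still-unwritten middle of the block.
			 */
			folio_zero_segments(folio, poff, from, to, poff + plen);
		} else {
			/* ... synchronous read of the block ... */
			iomap_set_range_uptodate(folio, poff, plen);
		}

That way the uptodate bits for a zeroed-but-not-yet-copied block are
only set after the data has actually landed in the folio, which closes
the window you describe above.

Thanks,
Yi.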