On Fri, Jul 31, 2020 at 09:47:13PM +0100, Matthew Wilcox wrote:
> On Fri, Jul 31, 2020 at 12:45:17AM +0100, Matthew Wilcox wrote:
> > On Fri, Jul 31, 2020 at 08:08:57AM +1000, Dave Chinner wrote:
> > > On Thu, Jul 30, 2020 at 02:50:40PM +0100, Matthew Wilcox wrote:
> > > > On Thu, Jul 30, 2020 at 09:05:03AM +1000, Dave Chinner wrote:
> > > > > On Wed, Jul 29, 2020 at 07:50:35PM +0100, Matthew Wilcox wrote:
> > > > > > I had a bit of a misunderstanding. Let's discard that proposal
> > > > > > and discuss what we want to optimise for, ignoring THPs. We don't
> > > > > > need to track any per-block state, of course. We could implement
> > > > > > __iomap_write_begin() by reading in the entire page (skipping the
> > > > > > last few blocks if they lie outside i_size, of course) and then
> > > > > > marking the entire page Uptodate.
> > > > >
> > > > > __iomap_write_begin() already does read-around for sub-page writes.
> > > > > And, if necessary, it does zeroing of unwritten extents, newly
> > > > > allocated ranges and ranges beyond EOF and marks them uptodate
> > > > > appropriately.
> > > >
> > > > But it doesn't read in the entire page, just the blocks in the page
> > > > which will be touched by the write.
> > >
> > > Ah, you are right, I got my page/offset macros mixed up.
> > >
> > > In which case, you just identified why the uptodate array is
> > > necessary and can't be removed. If we do a sub-page write() the page
> > > is not fully initialised, and so if we then mmap it readpage needs
> > > to know what part of the page requires initialisation to bring the
> > > page uptodate before it is exposed to userspace.
> >
> > You snipped the part of my mail where I explained how we could handle
> > that without the uptodate array ;-( Essentially, we do as you thought
> > it worked: we read the entire page (or at least the portion of it that
> > isn't going to be overwritten). Once all the bytes have been
> > transferred, we can mark the page Uptodate. We'll need to wait for the
> > transfer to happen if the write overlaps a block boundary, but we do
> > that right now.
>
> OK, so this turns out to be Hard. We enter the iomap code with
>
> iomap_file_buffered_write(struct kiocb *iocb, struct iov_iter *iter,
>                 const struct iomap_ops *ops)
>
> which does:
>         ret = iomap_apply(inode, pos, iov_iter_count(iter),
>                         IOMAP_WRITE, ops, iter, iomap_write_actor);
>
> so iomap_write_actor doesn't get told about the blocks in the page
> before the starting pos. They might be a hole or mapped; we have no
> idea.

So this is much the same problem that block size > page size has to
deal with for block allocation - the zero-around issue. That is, when
a sub-block write triggers a new allocation, it actually has to zero
the entire block in the page cache first, which means it needs to
expand the IO range in iomap_write_actor()....

https://lore.kernel.org/linux-xfs/20181107063127.3902-10-david@xxxxxxxxxxxxx/
https://lore.kernel.org/linux-xfs/20181107063127.3902-14-david@xxxxxxxxxxxxx/

> We could allocate pages _here_ and call iomap_readpage() for the pages
> which overlap the beginning and end of the I/O,

FWIW, this is effectively what calling iomap_zero() from
iomap_write_actor() does - it allocates pages outside the write range
via iomap_write_begin(), then zeroes them in memory and marks them
dirty....
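To make the zero-around arithmetic concrete, here's a minimal,
self-contained userspace sketch of the idea - not the iomap code
itself; the names (zero_around_write, BLOCK_SIZE, the flat "cache"
buffer standing in for the page cache) are purely illustrative. It
just rounds a sub-block write out to block boundaries and zeroes
whatever the write itself won't overwrite:

#include <assert.h>
#include <stdio.h>
#include <string.h>

/*
 * Toy model of the zero-around step for a sub-block write into a
 * newly allocated block: expand the write range out to block
 * boundaries, zero the parts of that range the write won't cover,
 * then copy the write data in.
 */
#define BLOCK_SIZE	4096ul

static void zero_around_write(char *cache, unsigned long pos,
			      const char *buf, unsigned long len)
{
	unsigned long start = pos & ~(BLOCK_SIZE - 1);		/* round down */
	unsigned long end = (pos + len + BLOCK_SIZE - 1) &
			    ~(BLOCK_SIZE - 1);			/* round up */

	/* Zero the leading and trailing ranges the write won't touch. */
	memset(cache + start, 0, pos - start);
	memset(cache + pos + len, 0, end - (pos + len));

	/* Now the actual write data. */
	memcpy(cache + pos, buf, len);
}

int main(void)
{
	static char cache[2 * BLOCK_SIZE];

	memset(cache, 0xff, sizeof(cache));	/* stale cache contents */
	zero_around_write(cache, 100, "hello", 5);

	assert(cache[0] == 0);			/* zeroed before the write */
	assert(cache[100] == 'h');		/* the write itself */
	assert(cache[BLOCK_SIZE - 1] == 0);	/* zeroed after the write */
	printf("zero-around ok\n");
	return 0;
}

In the real code the expanded range obviously also has to be mapped
or allocated through the filesystem and the pages marked uptodate and
dirty; the range rounding above is just the core of what "expand the
IO range" means.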
> but I'm not entirely
> convinced that the iomap_ops being passed in will appreciate being
> called for a read that has no intent to write the portions of the page
> outside pos.

I don't think it should matter what the range of the read being done
is - it has the same constraints whether it's to populate the partial
block or whole blocks just before the write. Especially as we are in
the buffered write path and so the filesystem has guaranteed us
exclusive access to the inode and its mapping here....

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx