Re: [PATCH 2/2] iomap: zero cached pages over unwritten extents on zero range

On Mon, Oct 19, 2020 at 02:01:44PM -0400, Brian Foster wrote:
> On Mon, Oct 19, 2020 at 12:55:19PM -0400, Brian Foster wrote:
> > On Thu, Oct 15, 2020 at 10:49:01AM +0100, Christoph Hellwig wrote:
> > > > +iomap_zero_range_skip_uncached(struct inode *inode, loff_t *pos,
> > > > +		loff_t *count, loff_t *written)
> > > > +{
> > > > +	unsigned dirty_offset, bytes = 0;
> > > > +
> > > > +	dirty_offset = page_cache_seek_hole_data(inode, *pos, *count,
> > > > +				SEEK_DATA);
> > > > +	if (dirty_offset == -ENOENT)
> > > > +		bytes = *count;
> > > > +	else if (dirty_offset > *pos)
> > > > +		bytes = dirty_offset - *pos;
> > > > +
> > > > +	if (bytes) {
> > > > +		*pos += bytes;
> > > > +		*count -= bytes;
> > > > +		*written += bytes;
> > > > +	}
> > > 
> > > I find the calling conventions weird.  why not return bytes and
> > > keep the increments/decrements of the three variables in the caller?
> > > 
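To illustrate the convention being suggested here, an untested sketch
(reusing the page_cache_seek_hole_data() call exactly as in the quoted
hunk; the caller-side fragment is only illustrative) might look like:

static loff_t
iomap_zero_range_skip_uncached(struct inode *inode, loff_t pos, loff_t count)
{
	loff_t dirty_offset;

	/* find the start of cached data in the range, if any */
	dirty_offset = page_cache_seek_hole_data(inode, pos, count, SEEK_DATA);
	if (dirty_offset == -ENOENT)
		return count;			/* nothing cached, skip it all */
	if (dirty_offset > pos)
		return dirty_offset - pos;	/* skip up to the cached data */
	return 0;				/* cached data starts right here */
}

	/* caller (e.g. iomap_zero_range_actor()) adjusts its own state: */
	loff_t bytes = iomap_zero_range_skip_uncached(inode, pos, count);

	pos += bytes;
	count -= bytes;
	written += bytes;
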
> > 
> > No particular reason. IIRC I had it both ways and just landed on this.
> > I'd change it, but as mentioned in the patch 1 thread I don't think this
> > patch is sufficient (with or without patch 1) anyways because the page
> > can also have been reclaimed before we get here.
> > 
> 
> Christoph,
> 
> What do you think about introducing behavior specific to
> iomap_truncate_page() to unconditionally write zeroes over unwritten
> extents? AFAICT that addresses the race and was historical XFS behavior
> (via block_truncate_page()) before iomap, so is not without precedent.
> What I'd probably do is bury the caller's did_zero parameter into a new
> internal struct iomap_zero_data to pass down into
> iomap_zero_range_actor(), then extend that structure with a
> 'zero_unwritten' field such that iomap_zero_range_actor() can do this:
> 
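(The snippet referenced above got trimmed in the quote; presumably the
idea was something along these lines. Illustrative sketch only, not the
original code:)

struct iomap_zero_data {
	bool	*did_zero;
	bool	zero_unwritten;
};

static loff_t
iomap_zero_range_actor(struct inode *inode, loff_t pos, loff_t count,
		void *data, struct iomap *iomap, struct iomap *srcmap)
{
	struct iomap_zero_data *zdata = data;

	/* holes never need zeroing */
	if (srcmap->type == IOMAP_HOLE)
		return count;
	/* unwritten extents are zeroed only if the caller asked for it */
	if (srcmap->type == IOMAP_UNWRITTEN && !zdata->zero_unwritten)
		return count;

	/* ... zeroing loop as before, with did_zero accessed via zdata ... */
	return count;
}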

Ugh, so the above doesn't quite describe historical behavior.
block_truncate_page() converts an unwritten block if a page exists
(dirty or not), but bails out if no page exists. We could still do the
above, but if we want something more intelligent I think we need to
check for a page before we get the mapping, so we know whether we can
safely skip an unwritten block or need to write over it. Otherwise, if
we check for a page within the actor, we have no way of knowing whether
a (possibly dirty) page had already been written back and/or reclaimed
since ->iomap_begin(). If we check for the page first, I think the
iolock/mmaplock held in the truncate path ensures a page can't be added
before we complete. We could take that further and also check for a
dirty || writeback page, but that's probably safer as a separate patch.
See the (compile tested only) diff below for an idea of what I was
thinking.

Brian

--- 8< ---

diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index bcfc288dba3f..2cdfcff02307 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -1000,17 +1000,56 @@ iomap_zero_range(struct inode *inode, loff_t pos, loff_t len, bool *did_zero,
 }
 EXPORT_SYMBOL_GPL(iomap_zero_range);
 
+struct iomap_trunc_priv {
+	bool *did_zero;
+	bool has_page;
+};
+
+static loff_t
+iomap_truncate_page_actor(struct inode *inode, loff_t pos, loff_t count,
+		void *data, struct iomap *iomap, struct iomap *srcmap)
+{
+	struct iomap_trunc_priv	*priv = data;
+	unsigned offset;
+	int status;
+
+	if (srcmap->type == IOMAP_HOLE)
+		return count;
+	if (srcmap->type == IOMAP_UNWRITTEN && !priv->has_page)
+		return count;
+
+	offset = offset_in_page(pos);
+	if (IS_DAX(inode))
+		status = dax_iomap_zero(pos, offset, count, iomap);
+	else
+		status = iomap_zero(inode, pos, offset, count, iomap, srcmap);
+	if (status < 0)
+		return status;
+
+	if (priv->did_zero)
+		*priv->did_zero = true;
+	return count;
+}
+
 int
 iomap_truncate_page(struct inode *inode, loff_t pos, bool *did_zero,
 		const struct iomap_ops *ops)
 {
+	struct iomap_trunc_priv priv = { .did_zero = did_zero };
 	unsigned int blocksize = i_blocksize(inode);
 	unsigned int off = pos & (blocksize - 1);
+	loff_t ret;
 
 	/* Block boundary? Nothing to do */
 	if (!off)
 		return 0;
-	return iomap_zero_range(inode, pos, blocksize - off, did_zero, ops);
+
+	priv.has_page = filemap_range_has_page(inode->i_mapping, pos, pos);
+	ret = iomap_apply(inode, pos, blocksize - off, IOMAP_ZERO, ops, &priv,
+			  iomap_truncate_page_actor);
+	if (ret <= 0)
+		return ret;
+	return 0;
 }
 EXPORT_SYMBOL_GPL(iomap_truncate_page);
 



