Re: [PATCH RFC 5/6] fs: xfs: iomap atomic write support

On Wed, Jan 24, 2024 at 02:26:44PM +0000, John Garry wrote:
> Ensure that when creating a mapping that we adhere to all the atomic
> write rules.
> 
> We check that the mapping covers the complete range of the write to ensure
> that we'll be just creating a single mapping.
> 
> Currently the minimum granularity is the FS block size, but it should be
> possible to support smaller granularities in future.
> 
> Signed-off-by: John Garry <john.g.garry@xxxxxxxxxx>
> ---
> I am setting this as an RFC as I am not sure on the change in
> xfs_iomap_write_direct() - it gives the desired result AFAICS.
> 
>  fs/xfs/xfs_iomap.c | 41 +++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 41 insertions(+)
> 
> diff --git a/fs/xfs/xfs_iomap.c b/fs/xfs/xfs_iomap.c
> index 18c8f168b153..758dc1c90a42 100644
> --- a/fs/xfs/xfs_iomap.c
> +++ b/fs/xfs/xfs_iomap.c
> @@ -289,6 +289,9 @@ xfs_iomap_write_direct(
>  		}
>  	}
>  
> +	if (xfs_inode_atomicwrites(ip))
> +		bmapi_flags = XFS_BMAPI_ZERO;

Why do we want to write zeroes to the disk when we're allocating space,
even if we're not sending an atomic write?

(This might want an explanation for why we're doing this at all -- it's
to avoid unwritten extent conversion, which defeats hardware untorn
writes.)
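
For illustration only -- a rough sketch of gating the zeroing on the
atomic write itself rather than the inode flag alone, assuming the iomap
flags were plumbed into xfs_iomap_write_direct() (they aren't in this
hunk):

	/*
	 * Hypothetical: only force zeroed (written) blocks when this
	 * allocation backs an actual atomic write; ordinary writes to an
	 * atomic-writes inode keep the cheaper unwritten allocation.
	 */
	if (xfs_inode_atomicwrites(ip) && (flags & IOMAP_ATOMIC))
		bmapi_flags = XFS_BMAPI_ZERO;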

I think we should support IOCB_ATOMIC when the mapping is unwritten --
the data will land on disk in an untorn fashion, the unwritten extent
conversion on IO completion is itself atomic, and callers still have to
set O_DSYNC to persist anything.  Then we can avoid the cost of
BMAPI_ZERO, because double-writes aren't free.
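
For reference, the conversion I mean is the one the dio completion path
already does -- roughly this, in xfs_dio_write_end_io() (shown for
orientation, not new code):

	/*
	 * The unwritten extent backing the untorn write is converted in
	 * a single transaction after the data is on disk, so the
	 * user-visible result is still atomic.
	 */
	if (flags & IOMAP_DIO_UNWRITTEN) {
		error = xfs_iomap_write_unwritten(ip, offset, size, true);
		if (error)
			return error;
	}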

> +
>  	error = xfs_trans_alloc_inode(ip, &M_RES(mp)->tr_write, dblocks,
>  			rblocks, force, &tp);
>  	if (error)
> @@ -812,6 +815,44 @@ xfs_direct_write_iomap_begin(
>  	if (error)
>  		goto out_unlock;
>  
> +	if (flags & IOMAP_ATOMIC) {
> +		xfs_filblks_t unit_min_fsb, unit_max_fsb;
> +		unsigned int unit_min, unit_max;
> +
> +		xfs_get_atomic_write_attr(ip, &unit_min, &unit_max);
> +		unit_min_fsb = XFS_B_TO_FSBT(mp, unit_min);
> +		unit_max_fsb = XFS_B_TO_FSBT(mp, unit_max);
> +
> +		if (!imap_spans_range(&imap, offset_fsb, end_fsb)) {
> +			error = -EINVAL;
> +			goto out_unlock;
> +		}
> +
> +		if ((offset & mp->m_blockmask) ||
> +		    (length & mp->m_blockmask)) {
> +			error = -EINVAL;
> +			goto out_unlock;
> +		}
> +
> +		if (imap.br_blockcount == unit_min_fsb ||
> +		    imap.br_blockcount == unit_max_fsb) {
> +			/* ok if exactly min or max */
> +		} else if (imap.br_blockcount < unit_min_fsb ||
> +			   imap.br_blockcount > unit_max_fsb) {
> +			error = -EINVAL;
> +			goto out_unlock;
> +		} else if (!is_power_of_2(imap.br_blockcount)) {
> +			error = -EINVAL;
> +			goto out_unlock;
> +		}
> +
> +		if (imap.br_startoff &&
> +		    imap.br_startoff & (imap.br_blockcount - 1)) {

Not sure why we care about the file position, it's br_startblock that
gets passed into the bio, not br_startoff.
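
If the check stays, a hypothetical variant keyed to the physical extent
(and it probably wants converting to daddr units to match what the
device actually sees):

	/*
	 * Hypothetical: check alignment of the physical start of the
	 * extent, which is what ends up in the bio, not the file offset.
	 */
	if (imap.br_startblock & (imap.br_blockcount - 1)) {
		error = -EINVAL;
		goto out_unlock;
	}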

I'm also still not convinced that any of this validation is useful here.
The block device stack underneath the filesystem can change at any time
without any particular notice to the fs, so the only way to find out if
the proposed IO would meet the alignment constraints is to submit_bio
and see what happens.

(The "one bio per untorn write request" thing in the direct-io.c patch
sound sane to me though.)

--D

> +			error = -EINVAL;
> +			goto out_unlock;
> +		}
> +	}
> +
>  	if (imap_needs_cow(ip, flags, &imap, nimaps)) {
>  		error = -EAGAIN;
>  		if (flags & IOMAP_NOWAIT)
> -- 
> 2.31.1
> 
> 