This is a note to let you know that I've just added the patch titled

    xfs: use byte ranges for write cleanup ranges

to the 6.1-stable tree which can be found at:

    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     xfs-use-byte-ranges-for-write-cleanup-ranges.patch
and it can be found in the queue-6.1 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@xxxxxxxxxxxxxxx> know about it.


>From stable+bounces-42892-greg=kroah.com@xxxxxxxxxxxxxxx Wed May 1 20:41:29 2024
From: Leah Rumancik <leah.rumancik@xxxxxxxxx>
Date: Wed, 1 May 2024 11:40:51 -0700
Subject: xfs: use byte ranges for write cleanup ranges
To: stable@xxxxxxxxxxxxxxx
Cc: linux-xfs@xxxxxxxxxxxxxxx, amir73il@xxxxxxxxx, chandan.babu@xxxxxxxxxx, fred@xxxxxxxxxxxxxx, Dave Chinner <dchinner@xxxxxxxxxx>, "Darrick J . Wong" <djwong@xxxxxxxxxx>, Leah Rumancik <leah.rumancik@xxxxxxxxx>
Message-ID: <20240501184112.3799035-3-leah.rumancik@xxxxxxxxx>

From: Dave Chinner <dchinner@xxxxxxxxxx>

[ Upstream commit b71f889c18ada210a97aa3eb5e00c0de552234c6 ]

xfs_buffered_write_iomap_end() currently converts the byte ranges
passed to it to filesystem blocks to pass them to the bmap code to
punch out delalloc blocks, but then has to convert filesystem blocks
back to byte ranges for page cache truncate.

We're about to make the page cache truncate go away and replace it
with a page cache walk, so having to convert everything to/from/to
filesystem blocks is messy and error-prone. It is much easier to
pass around byte ranges and convert to page indexes and/or
filesystem blocks only where those units are needed.

In preparation for the page cache walk being added, add a helper
that converts byte ranges to filesystem blocks and calls
xfs_bmap_punch_delalloc_range() and convert
xfs_buffered_write_iomap_end() to calculate limits in byte ranges.

Signed-off-by: Dave Chinner <dchinner@xxxxxxxxxx>
Reviewed-by: Darrick J. Wong <djwong@xxxxxxxxxx>
Signed-off-by: Leah Rumancik <leah.rumancik@xxxxxxxxx>
Acked-by: Darrick J. Wong <djwong@xxxxxxxxxx>
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>
---
 fs/xfs/xfs_iomap.c |   40 +++++++++++++++++++++++++---------------
 1 file changed, 25 insertions(+), 15 deletions(-)

--- a/fs/xfs/xfs_iomap.c
+++ b/fs/xfs/xfs_iomap.c
@@ -1121,6 +1121,20 @@ out_unlock:
 }
 
 static int
+xfs_buffered_write_delalloc_punch(
+	struct inode		*inode,
+	loff_t			start_byte,
+	loff_t			end_byte)
+{
+	struct xfs_mount	*mp = XFS_M(inode->i_sb);
+	xfs_fileoff_t		start_fsb = XFS_B_TO_FSBT(mp, start_byte);
+	xfs_fileoff_t		end_fsb = XFS_B_TO_FSB(mp, end_byte);
+
+	return xfs_bmap_punch_delalloc_range(XFS_I(inode), start_fsb,
+				end_fsb - start_fsb);
+}
+
+static int
 xfs_buffered_write_iomap_end(
 	struct inode		*inode,
 	loff_t			offset,
@@ -1129,10 +1143,9 @@ xfs_buffered_write_iomap_end(
 	unsigned		flags,
 	struct iomap		*iomap)
 {
-	struct xfs_inode	*ip = XFS_I(inode);
-	struct xfs_mount	*mp = ip->i_mount;
-	xfs_fileoff_t		start_fsb;
-	xfs_fileoff_t		end_fsb;
+	struct xfs_mount	*mp = XFS_M(inode->i_sb);
+	loff_t			start_byte;
+	loff_t			end_byte;
 	int			error = 0;
 
 	if (iomap->type != IOMAP_DELALLOC)
@@ -1157,13 +1170,13 @@ xfs_buffered_write_iomap_end(
 	 * the range.
 	 */
 	if (unlikely(!written))
-		start_fsb = XFS_B_TO_FSBT(mp, offset);
+		start_byte = round_down(offset, mp->m_sb.sb_blocksize);
 	else
-		start_fsb = XFS_B_TO_FSB(mp, offset + written);
-	end_fsb = XFS_B_TO_FSB(mp, offset + length);
+		start_byte = round_up(offset + written, mp->m_sb.sb_blocksize);
+	end_byte = round_up(offset + length, mp->m_sb.sb_blocksize);
 
 	/* Nothing to do if we've written the entire delalloc extent */
-	if (start_fsb >= end_fsb)
+	if (start_byte >= end_byte)
 		return 0;
 
 	/*
@@ -1173,15 +1186,12 @@ xfs_buffered_write_iomap_end(
 	 * leave dirty pages with no space reservation in the cache.
 	 */
 	filemap_invalidate_lock(inode->i_mapping);
-	truncate_pagecache_range(VFS_I(ip), XFS_FSB_TO_B(mp, start_fsb),
-				 XFS_FSB_TO_B(mp, end_fsb) - 1);
-
-	error = xfs_bmap_punch_delalloc_range(ip, start_fsb,
-					       end_fsb - start_fsb);
+	truncate_pagecache_range(inode, start_byte, end_byte - 1);
+	error = xfs_buffered_write_delalloc_punch(inode, start_byte, end_byte);
 	filemap_invalidate_unlock(inode->i_mapping);
 	if (error && !xfs_is_shutdown(mp)) {
-		xfs_alert(mp, "%s: unable to clean up ino %lld",
-			__func__, ip->i_ino);
+		xfs_alert(mp, "%s: unable to clean up ino 0x%llx",
+			__func__, XFS_I(inode)->i_ino);
 		return error;
 	}
 	return 0;


Patches currently in stable-queue which might be from kroah.com@xxxxxxxxxxxxxxx are

queue-6.1/xfs-iomap-move-delalloc-punching-to-iomap.patch
queue-6.1/xfs-fix-off-by-one-block-in-xfs_discard_folio.patch
queue-6.1/xfs-invalidate-block-device-page-cache-during-unmount.patch
queue-6.1/xfs-drop-write-error-injection-is-unfixable-remove-it.patch
queue-6.1/iomap-buffered-write-failure-should-not-truncate-the-page-cache.patch
queue-6.1/xfs-fix-super-block-buf-log-item-uaf-during-force-shutdown.patch
queue-6.1/xfs-fix-incorrect-i_nlink-caused-by-inode-racing.patch
queue-6.1/xfs-estimate-post-merge-refcounts-correctly.patch
queue-6.1/xfs-fix-log-recovery-when-unknown-rocompat-bits-are-set.patch
queue-6.1/xfs-punching-delalloc-extents-on-write-failure-is-racy.patch
queue-6.1/xfs-allow-inode-inactivation-during-a-ro-mount-log-recovery.patch
queue-6.1/iomap-write-iomap-validity-checks.patch
queue-6.1/xfs-attach-dquots-to-inode-before-reading-data-cow-fork-mappings.patch
queue-6.1/xfs-fix-sb-write-verify-for-lazysbcount.patch
queue-6.1/xfs-wait-iclog-complete-before-tearing-down-ail.patch
queue-6.1/xfs-use-byte-ranges-for-write-cleanup-ranges.patch
queue-6.1/xfs-xfs_bmap_punch_delalloc_range-should-take-a-byte-range.patch
queue-6.1/xfs-write-page-faults-in-iomap-are-not-buffered-writes.patch
queue-6.1/xfs-short-circuit-xfs_growfs_data_private-if-delta-is-zero.patch
queue-6.1/xfs-fix-incorrect-error-out-in-xfs_remove.patch
queue-6.1/xfs-invalidate-xfs_bufs-when-allocating-cow-extents.patch
queue-6.1/xfs-hoist-refcount-record-merge-predicates.patch
queue-6.1/xfs-get-root-inode-correctly-at-bulkstat.patch
queue-6.1/xfs-use-iomap_valid-method-to-detect-stale-cached-iomaps.patch
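
For reference, a minimal standalone sketch of the byte-range arithmetic the
patch introduces in xfs_buffered_write_iomap_end(): round the start down to a
block boundary when nothing was written, round it up past the last written
byte otherwise, and round the end up past offset + length. This is not kernel
code; blk_round_down(), blk_round_up() and write_cleanup_range() are
hypothetical stand-ins for the kernel's round_down()/round_up() macros and
mp->m_sb.sb_blocksize.

	/* Sketch only: userspace illustration of the cleanup-range rounding. */
	#include <stdbool.h>
	#include <stdint.h>
	#include <stdio.h>

	/* Round down/up to a power-of-two block size. */
	static uint64_t blk_round_down(uint64_t x, uint64_t blocksize)
	{
		return x & ~(blocksize - 1);
	}

	static uint64_t blk_round_up(uint64_t x, uint64_t blocksize)
	{
		return (x + blocksize - 1) & ~(blocksize - 1);
	}

	/*
	 * Compute the byte range [*start_byte, *end_byte) left to clean up
	 * after a buffered write that wrote 'written' of 'length' bytes at
	 * 'offset'. Returns false if the whole delalloc extent was written
	 * and there is nothing to punch out.
	 */
	static bool write_cleanup_range(uint64_t offset, uint64_t length,
					uint64_t written, uint64_t blocksize,
					uint64_t *start_byte, uint64_t *end_byte)
	{
		if (written == 0)
			*start_byte = blk_round_down(offset, blocksize);
		else
			*start_byte = blk_round_up(offset + written, blocksize);
		*end_byte = blk_round_up(offset + length, blocksize);

		return *start_byte < *end_byte;
	}

	int main(void)
	{
		uint64_t start, end;

		/* 10240-byte write at offset 5000, failed after 2000 bytes, 4k blocks. */
		if (write_cleanup_range(5000, 10240, 2000, 4096, &start, &end))
			printf("punch delalloc over bytes [%llu, %llu)\n",
			       (unsigned long long)start,
			       (unsigned long long)end);
		return 0;
	}

With these inputs the cleanup range comes out as [8192, 16384): the partially
written block containing bytes 5000..7999 is kept, and only whole unwritten
blocks are handed to the punch helper, which then converts the byte range to
filesystem blocks exactly once.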