From: Zhang Yi <yi.zhang@xxxxxxxxxx>

When doing an unaligned truncate down on a realtime file whose
sb_rextsize is bigger than one block, xfs_truncate_page() only zeroes
out the tail of the EOF block, which can expose stale data since commit
943bc0882ceb ("iomap: don't increase i_size if it's not a write
operation").

If we truncate a file that contains a large enough written extent:

     |<    rtext    >|<    rtext    >|
  ...WWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWW
        ^ (new EOF)                  ^ old EOF

Since we only zero out the tail of the EOF block, and
xfs_itruncate_extents() only unmaps whole aligned extents, the file is
left in this state:

     |<    rtext    >|
  ...WWWzWWWWWWWWWWWWW
        ^ new EOF

Then, if we do an extending write like this, the blocks in the previous
tail extent become stale:

     |<    rtext    >|
  ...WWWzSSSSSSSSSSSSS..........WWWWWWWWWWWWWWWWW
        ^ old EOF               ^ append start  ^ new EOF

Fix this by zeroing out the tail of the allocation unit and also making
sure xfs_itruncate_extents() unmaps allocation unit aligned extents.

Signed-off-by: Zhang Yi <yi.zhang@xxxxxxxxxx>
---
 fs/xfs/xfs_inode.c | 3 ++-
 fs/xfs/xfs_iops.c  | 2 +-
 2 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/fs/xfs/xfs_inode.c b/fs/xfs/xfs_inode.c
index 58fb7a5062e1..92daa2279053 100644
--- a/fs/xfs/xfs_inode.c
+++ b/fs/xfs/xfs_inode.c
@@ -1511,7 +1511,8 @@ xfs_itruncate_extents_flags(
          * We have to free all the blocks to the bmbt maximum offset, even if
          * the page cache can't scale that far.
          */
-        first_unmap_block = XFS_B_TO_FSB(mp, (xfs_ufsize_t)new_size);
+        first_unmap_block = XFS_B_TO_FSB(mp,
+                        roundup_64(new_size, xfs_inode_alloc_unitsize(ip)));
         if (!xfs_verify_fileoff(mp, first_unmap_block)) {
                 WARN_ON_ONCE(first_unmap_block > XFS_MAX_FILEOFF);
                 return 0;
diff --git a/fs/xfs/xfs_iops.c b/fs/xfs/xfs_iops.c
index 0919a42cceb6..8e7e6c435fb3 100644
--- a/fs/xfs/xfs_iops.c
+++ b/fs/xfs/xfs_iops.c
@@ -858,7 +858,7 @@ xfs_setattr_truncate_data(
         }

         /* Truncate down */
-        blocksize = i_blocksize(inode);
+        blocksize = xfs_inode_alloc_unitsize(ip);

         /*
          * iomap won't detect a dirty page over an unwritten block (or a cow
-- 
2.39.2
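
For clarity, here is a small userspace sketch of the allocation-unit
rounding the xfs_inode.c hunk relies on. It is not kernel code: the
round_up_64() helper and the example geometry (4k blocks, a realtime
extent size of 4 blocks, a truncate to 6000 bytes) are illustrative
assumptions standing in for roundup_64(), xfs_inode_alloc_unitsize()
and XFS_B_TO_FSB().

/*
 * Userspace sketch only; round_up_64() and the geometry below are
 * illustrative stand-ins for the kernel helpers, not the real ones.
 */
#include <stdio.h>
#include <stdint.h>

/* Round size up to the next multiple of unit (both in bytes). */
static uint64_t round_up_64(uint64_t size, uint64_t unit)
{
	return ((size + unit - 1) / unit) * unit;
}

int main(void)
{
	uint64_t blocksize = 4096;                  /* fs block size */
	uint64_t rextsize = 4;                      /* rt extent size in blocks */
	uint64_t alloc_unit = rextsize * blocksize; /* 16k allocation unit */
	uint64_t new_size = 6000;                   /* unaligned truncate target */

	/* Old calculation: round the new size up to the next block only. */
	uint64_t old_first_unmap = round_up_64(new_size, blocksize) / blocksize;

	/* Patched calculation: round up to the allocation unit first. */
	uint64_t new_first_unmap = round_up_64(new_size, alloc_unit) / blocksize;

	printf("first_unmap_block: old = %llu, patched = %llu\n",
	       (unsigned long long)old_first_unmap,
	       (unsigned long long)new_first_unmap);
	return 0;
}

With these example numbers the old calculation asks to start unmapping
at block 2, even though the realtime tail extent can only be freed in
whole rextsize units, so blocks 2-3 stay mapped and written beyond EOF;
the patched calculation starts at block 4 and leaves the blocks beyond
EOF inside that allocation unit to the (now allocation-unit-sized)
zeroing in the truncate path.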