On Sun, May 24, 2020 at 10:16:12AM -0700, Darrick J. Wong wrote:
> From: Darrick J. Wong <darrick.wong@xxxxxxxxxx>
> 
> When we're estimating a new speculative preallocation length for an
> extending write, we should walk backwards through the extent list to
> determine the number of blocks that are physically and logically
> contiguous with the write offset, and use that as an input to the
> preallocation size computation.
> 
> This way, preallocation length is truly measured by the effectiveness
> of the allocator in giving us contiguous allocations without being
> influenced by the state of a given extent. This fixes both the problem
> where ZERO_RANGE within an EOF can reduce preallocation, and prevents
> the unnecessary shrinkage of preallocation when delalloc extents are
> turned into unwritten extents.
> 
> This was found as a regression in xfs/014 after changing delalloc
> writes to create unwritten extents during writeback.
> 
> Signed-off-by: Darrick J. Wong <darrick.wong@xxxxxxxxxx>
> ---
> v2: multiplication instead of lshift
> ---

Reviewed-by: Brian Foster <bfoster@xxxxxxxxxx>

>  fs/xfs/xfs_iomap.c |   40 +++++++++++++++++++++++++++-------------
>  1 file changed, 27 insertions(+), 13 deletions(-)
> 
> diff --git a/fs/xfs/xfs_iomap.c b/fs/xfs/xfs_iomap.c
> index 7d8966ce630a..e74a8c2c94ce 100644
> --- a/fs/xfs/xfs_iomap.c
> +++ b/fs/xfs/xfs_iomap.c
> @@ -377,15 +377,17 @@ xfs_iomap_prealloc_size(
>  	loff_t			count,
>  	struct xfs_iext_cursor	*icur)
>  {
> +	struct xfs_iext_cursor	ncur = *icur;
> +	struct xfs_bmbt_irec	prev, got;
>  	struct xfs_mount	*mp = ip->i_mount;
>  	struct xfs_ifork	*ifp = XFS_IFORK_PTR(ip, whichfork);
>  	xfs_fileoff_t		offset_fsb = XFS_B_TO_FSBT(mp, offset);
> -	struct xfs_bmbt_irec	prev;
> -	int			shift = 0;
>  	int64_t			freesp;
>  	xfs_fsblock_t		qblocks;
> -	int			qshift = 0;
>  	xfs_fsblock_t		alloc_blocks = 0;
> +	xfs_extlen_t		plen;
> +	int			shift = 0;
> +	int			qshift = 0;
> 
>  	if (offset + count <= XFS_ISIZE(ip))
>  		return 0;
> @@ -400,7 +402,7 @@ xfs_iomap_prealloc_size(
>  	 */
>  	if ((mp->m_flags & XFS_MOUNT_ALLOCSIZE) ||
>  	    XFS_ISIZE(ip) < XFS_FSB_TO_B(mp, mp->m_dalign) ||
> -	    !xfs_iext_peek_prev_extent(ifp, icur, &prev) ||
> +	    !xfs_iext_prev_extent(ifp, &ncur, &prev) ||
>  	    prev.br_startoff + prev.br_blockcount < offset_fsb)
>  		return mp->m_allocsize_blocks;
> 
> @@ -413,16 +415,28 @@ xfs_iomap_prealloc_size(
>  	 * preallocation size.
>  	 *
>  	 * If the extent is a hole, then preallocation is essentially disabled.
> -	 * Otherwise we take the size of the preceding data extent as the basis
> -	 * for the preallocation size. If the size of the extent is greater than
> -	 * half the maximum extent length, then use the current offset as the
> -	 * basis. This ensures that for large files the preallocation size
> -	 * always extends to MAXEXTLEN rather than falling short due to things
> -	 * like stripe unit/width alignment of real extents.
> +	 * Otherwise we take the size of the preceding data extents as the basis
> +	 * for the preallocation size. Note that we don't care if the previous
> +	 * extents are written or not.
> +	 *
> +	 * If the size of the extents is greater than half the maximum extent
> +	 * length, then use the current offset as the basis. This ensures that
> +	 * for large files the preallocation size always extends to MAXEXTLEN
> +	 * rather than falling short due to things like stripe unit/width
> +	 * alignment of real extents.
>  	 */
> -	if (prev.br_blockcount <= (MAXEXTLEN >> 1))
> -		alloc_blocks = prev.br_blockcount << 1;
> -	else
> +	plen = prev.br_blockcount;
> +	while (xfs_iext_prev_extent(ifp, &ncur, &got)) {
> +		if (plen > MAXEXTLEN / 2 ||
> +		    isnullstartblock(got.br_startblock) ||
> +		    got.br_startoff + got.br_blockcount != prev.br_startoff ||
> +		    got.br_startblock + got.br_blockcount != prev.br_startblock)
> +			break;
> +		plen += got.br_blockcount;
> +		prev = got;
> +	}
> +	alloc_blocks = plen * 2;
> +	if (alloc_blocks > MAXEXTLEN)
>  		alloc_blocks = XFS_B_TO_FSB(mp, offset);
>  	if (!alloc_blocks)
>  		goto check_writeio;
> 
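For anyone following the logic outside the XFS tree, here is a minimal stand-alone
sketch of the backward contiguity walk that the last hunk implements. The types and
names below (struct ext, contiguous_prev_blocks, DELALLOC_STARTBLOCK) are invented
for illustration and are not the kernel's; only the merge conditions (logical
adjacency, physical adjacency, not delalloc, stop once past MAXEXTLEN / 2) mirror
the patch.

/*
 * Stand-alone sketch, not kernel code: walk backwards over an extent
 * array and add up how many blocks are logically and physically
 * contiguous with the extent at index "last".  The walk stops at
 * delalloc extents (modelled here by a sentinel start block, standing
 * in for isnullstartblock()) or once the accumulated length already
 * exceeds half of MAXEXTLEN.
 */
#include <stdint.h>
#include <stdio.h>

#define MAXEXTLEN		((1ULL << 21) - 1)	/* 21-bit extent length limit */
#define DELALLOC_STARTBLOCK	UINT64_MAX		/* invented sentinel for delalloc */

struct ext {
	uint64_t	startoff;	/* logical offset, in blocks */
	uint64_t	startblock;	/* physical block, or sentinel */
	uint64_t	blockcount;	/* length in blocks */
};

/* Sum the contiguous run ending at exts[last]; this is the prealloc basis. */
static uint64_t contiguous_prev_blocks(const struct ext *exts, int last)
{
	struct ext	prev = exts[last];
	uint64_t	plen = prev.blockcount;
	int		i;

	for (i = last - 1; i >= 0; i--) {
		const struct ext *got = &exts[i];

		if (plen > MAXEXTLEN / 2 ||
		    got->startblock == DELALLOC_STARTBLOCK ||
		    got->startoff + got->blockcount != prev.startoff ||
		    got->startblock + got->blockcount != prev.startblock)
			break;
		plen += got->blockcount;
		prev = *got;
	}
	return plen;
}

int main(void)
{
	/* Three extents: the first two are contiguous, the third is not. */
	struct ext exts[] = {
		{ .startoff = 0,   .startblock = 1000, .blockcount = 64 },
		{ .startoff = 64,  .startblock = 1064, .blockcount = 32 },
		{ .startoff = 200, .startblock = 5000, .blockcount = 16 },
	};

	/* Only extent 2 counts: extent 1 is not adjacent to it -> 16 blocks. */
	printf("basis ending at ext 2: %llu blocks\n",
	       (unsigned long long)contiguous_prev_blocks(exts, 2));

	/* Extents 0 and 1 merge into a 96 block basis. */
	printf("basis ending at ext 1: %llu blocks\n",
	       (unsigned long long)contiguous_prev_blocks(exts, 1));
	return 0;
}

The early break once plen exceeds MAXEXTLEN / 2 is what bounds the walk: once the
accumulated run is already large enough that doubling it would exceed MAXEXTLEN,
the patch falls back to using the current offset as the basis anyway, so there is
no point scanning further back.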