From: Darrick J. Wong <djwong@xxxxxxxxxx>

Generally speaking, when a user calls fallocate, they're looking to
preallocate space in a file in the largest contiguous chunks possible.
If free space is low, it's possible that the free space will look
unnecessarily fragmented because there are unlinked inodes that are
holding on to space that we could allocate.  When this happens,
fallocate makes suboptimal allocation decisions for the sake of deleted
files, which doesn't make much sense, so scan the filesystem for dead
items to delete to try to avoid this.

Note that there are a handful of fstests that fill a filesystem, delete
just enough files to allow a single large allocation, and check that
fallocate actually gets the allocation.  These tests regress because
the test runs fallocate before the inode gc has a chance to run, so add
this scan to maintain as much of the old behavior as possible.

Signed-off-by: Darrick J. Wong <djwong@xxxxxxxxxx>
---
 fs/xfs/xfs_bmap_util.c |   44 ++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 44 insertions(+)

diff --git a/fs/xfs/xfs_bmap_util.c b/fs/xfs/xfs_bmap_util.c
index 21aa38183ae9..6d2fece45bdc 100644
--- a/fs/xfs/xfs_bmap_util.c
+++ b/fs/xfs/xfs_bmap_util.c
@@ -28,6 +28,7 @@
 #include "xfs_icache.h"
 #include "xfs_iomap.h"
 #include "xfs_reflink.h"
+#include "xfs_sb.h"
 
 /* Kernel only BMAP related definitions and functions */
 
@@ -733,6 +734,44 @@ xfs_free_eofblocks(
 	return error;
 }
 
+/*
+ * If we suspect that the target device is full enough that it isn't going
+ * to be able to satisfy the entire request, try a non-sync inode
+ * inactivation scan to free up space.  While it's perfectly fine to fill a
+ * preallocation request with a bunch of short extents, we'd prefer to do
+ * the inactivation work now to combat long term fragmentation in new file
+ * data.  This is purely for optimization, so we don't take any blocking
+ * locks and we only look for space that is already on the reclaim list
+ * (i.e. we don't zap speculative preallocations).
+ */
+static int
+xfs_alloc_reclaim_inactive_space(
+	struct xfs_mount	*mp,
+	bool			is_rt,
+	xfs_filblks_t		allocatesize_fsb)
+{
+	struct xfs_perag	*pag;
+	struct xfs_sb		*sbp = &mp->m_sb;
+	xfs_extlen_t		free;
+	xfs_agnumber_t		agno;
+
+	if (is_rt) {
+		if (sbp->sb_frextents * sbp->sb_rextsize >= allocatesize_fsb)
+			return 0;
+	} else {
+		for (agno = 0; agno < mp->m_sb.sb_agcount; agno++) {
+			pag = xfs_perag_get(mp, agno);
+			free = pag->pagf_freeblks;
+			xfs_perag_put(pag);
+
+			if (free >= allocatesize_fsb)
+				return 0;
+		}
+	}
+
+	return xfs_inodegc_free_space(mp, NULL);
+}
+
 int
 xfs_alloc_file_space(
 	struct xfs_inode	*ip,
@@ -817,6 +856,11 @@ xfs_alloc_file_space(
 			rblocks = 0;
 		}
 
+		error = xfs_alloc_reclaim_inactive_space(mp, rt,
+				allocatesize_fsb);
+		if (error)
+			break;
+
 		/*
 		 * Allocate and setup the transaction.
 		 */
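
For reference, here is a minimal userspace sketch of the fstests scenario
described above, not part of the patch: unlink a filler file to free just
enough space, then immediately ask fallocate for a large range before inode
gc has had a chance to run.  The mount point, file names, and allocation
size are made up for illustration; only standard syscalls are used.

/* Hypothetical reproducer sketch; paths and sizes are invented. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>

int main(void)
{
	/* Drop the filler file; its blocks stay pinned until inactivation. */
	if (unlink("/mnt/xfs/filler") < 0)
		perror("unlink filler");

	int fd = open("/mnt/xfs/prealloc", O_CREAT | O_RDWR, 0644);
	if (fd < 0) {
		perror("open prealloc");
		return 1;
	}

	/*
	 * Without the scan added by this patch, this request may be filled
	 * with short extents (or fail) because the unlinked inode still
	 * holds the space that was just freed.
	 */
	if (fallocate(fd, 0, 0, 1024ULL * 1024 * 1024) < 0)
		perror("fallocate");

	close(fd);
	return 0;
}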