xfs_bmap_del_extent() handles extent removal from the in-core and on-disk
extent lists. When removing a delalloc range, it updates the indirect block
reservation based on what was removed. It currently enforces that the new
indirect block reservation is less than or equal to the original. This holds
in all cases except when the removed range punches a hole in a single delalloc
extent and thus splits it in two. With small enough extents, it is possible to
split an indlen==1 extent into two slightly smaller extents. This leaves one
extent with zero indirect blocks and leads to assert failures elsewhere (e.g.,
in xfs_bunmapi() if that extent happens to be removed).

Refactor the indirect block reservation code into a new helper invoked from
the xfs_bunmapi() path. Allow the helper to steal blocks from the deleted
extent if necessary to satisfy the worst case indirect reservation of the new
extents. This is safe now that the caller does not update the icsb counters
until the extent is removed (such stolen blocks simply remain accounted as
allocated). Otherwise, fall back to the original reservation using the
existing algorithm and warn if we end up with an extent without any
reservation at all, so the situation is easier to detect in the future.

This allows generic/033 to run on XFS without assert failures.

Signed-off-by: Brian Foster <bfoster@xxxxxxxxxx>
---
 fs/xfs/libxfs/xfs_bmap.c | 100 ++++++++++++++++++++++++++++++++++++++---------
 1 file changed, 81 insertions(+), 19 deletions(-)

diff --git a/fs/xfs/libxfs/xfs_bmap.c b/fs/xfs/libxfs/xfs_bmap.c
index 5abd22b..64b88b6 100644
--- a/fs/xfs/libxfs/xfs_bmap.c
+++ b/fs/xfs/libxfs/xfs_bmap.c
@@ -4701,6 +4701,68 @@ error0:
 }
 
 /*
+ * When a delalloc extent is split (e.g., due to a hole punch), the original
+ * indlen reservation must be shared across the two new extents that are left
+ * behind.
+ *
+ * Given the worst case indlen for the two new extents, limit the total
+ * reservation to that of the original extent. Blocks may be stolen from any
+ * extra made available by the caller to make up a deficiency (e.g., if the
+ * original reservation consists of a single block). The number of stolen
+ * blocks is returned. The availability and accounting of stealable blocks
+ * are the responsibility of the caller.
+ */
+static xfs_filblks_t
+xfs_bmap_limit_indlen(
+	xfs_filblks_t		ores,		/* original res. */
+	xfs_filblks_t		*indlen1,	/* ext1 worst indlen */
+	xfs_filblks_t		*indlen2,	/* ext2 worst indlen */
+	xfs_filblks_t		avail)		/* stealable blocks */
+{
+	xfs_filblks_t		nres;		/* new total res. */
+	xfs_filblks_t		temp;
+	xfs_filblks_t		temp2;
+	xfs_filblks_t		stolen = 0;
+
+	temp = *indlen1;
+	temp2 = *indlen2;
+	nres = temp + temp2;
+
+	/*
+	 * Steal as many blocks as we can to try and satisfy the worst case
+	 * indlen of both new extents.
+	 */
+	while (nres > ores && avail) {
+		nres--;
+		avail--;
+		stolen++;
+	}
+
+	/*
+	 * If we've exhausted stealable blocks and still haven't satisfied the
+	 * worst case reservation, we have no choice but to pare back until
+	 * covered by the original reservation.
+	 */
+	while (nres > ores) {
+		if (temp) {
+			temp--;
+			nres--;
+		}
+		if (nres == ores)
+			break;
+		if (temp2) {
+			temp2--;
+			nres--;
+		}
+	}
+
+	*indlen1 = temp;
+	*indlen2 = temp2;
+
+	return stolen;
+}
+
+/*
  * Called by xfs_bmapi to update file extent records and the btree
  * after removing space (or undoing a delayed allocation).
  */
@@ -4963,28 +5025,28 @@ xfs_bmap_del_extent(
 			XFS_IFORK_NEXT_SET(ip, whichfork,
 				XFS_IFORK_NEXTENTS(ip, whichfork) + 1);
 		} else {
+			xfs_filblks_t	stolen;
 			ASSERT(whichfork == XFS_DATA_FORK);
-			temp = xfs_bmap_worst_indlen(ip, temp);
+
+			/*
+			 * Fix up the indlen reservations. We can safely steal
+			 * blocks from the deleted extent as this simply fudges
+			 * the icsb fdblocks accounting in xfs_bunmapi().
+			 */
+			temp = xfs_bmap_worst_indlen(ip, got.br_blockcount);
+			temp2 = xfs_bmap_worst_indlen(ip, new.br_blockcount);
+			stolen = xfs_bmap_limit_indlen(da_old, &temp, &temp2,
+						       del->br_blockcount);
+			da_new = temp + temp2 - stolen;
+			del->br_blockcount -= stolen;
+
+			/*
+			 * Set the reservation for each extent. Warn if either
+			 * is zero as this can lead to delalloc problems.
+			 */
+			WARN_ON_ONCE(!temp || !temp2);
 			xfs_bmbt_set_startblock(ep, nullstartblock((int)temp));
-			temp2 = xfs_bmap_worst_indlen(ip, temp2);
 			new.br_startblock = nullstartblock((int)temp2);
-			da_new = temp + temp2;
-			while (da_new > da_old) {
-				if (temp) {
-					temp--;
-					da_new--;
-					xfs_bmbt_set_startblock(ep,
-						nullstartblock((int)temp));
-				}
-				if (da_new == da_old)
-					break;
-				if (temp2) {
-					temp2--;
-					da_new--;
-					new.br_startblock =
-						nullstartblock((int)temp2);
-				}
-			}
 		}
 		trace_xfs_bmap_post_update(ip, *idx, state, _THIS_IP_);
 		xfs_iext_insert(ip, *idx + 1, 1, &new, state);
-- 
1.8.3.1

_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs
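For anyone who wants to poke at the splitting logic outside the kernel, below
is a minimal userspace sketch of the behaviour the new helper implements. It
is illustrative only: split_indlen() is a hypothetical stand-in for the helper
added by this patch, and a plain unsigned long long typedef replaces
xfs_filblks_t. The driver exercises the indlen==1 split described above, first
with no stealable blocks (the case that currently leaves an extent with no
reservation) and then with one block stolen from the deleted range.

/*
 * Userspace model of the indlen split logic (illustrative sketch only,
 * not kernel code). split_indlen() mirrors the helper added above with
 * simplified types: unsigned long long stands in for xfs_filblks_t.
 */
#include <stdio.h>

typedef unsigned long long filblks_t;	/* stand-in for xfs_filblks_t */

static filblks_t
split_indlen(
	filblks_t	ores,		/* original reservation */
	filblks_t	*indlen1,	/* ext1 worst case indlen */
	filblks_t	*indlen2,	/* ext2 worst case indlen */
	filblks_t	avail)		/* stealable blocks */
{
	filblks_t	len1 = *indlen1;
	filblks_t	len2 = *indlen2;
	filblks_t	nres = len1 + len2;	/* new total reservation */
	filblks_t	stolen = 0;

	/* steal blocks to try and cover the worst case reservation */
	while (nres > ores && avail) {
		nres--;
		avail--;
		stolen++;
	}

	/* otherwise pare the new extents back to the original reservation */
	while (nres > ores) {
		if (len1) {
			len1--;
			nres--;
		}
		if (nres == ores)
			break;
		if (len2) {
			len2--;
			nres--;
		}
	}

	*indlen1 = len1;
	*indlen2 = len2;
	return stolen;
}

int
main(void)
{
	filblks_t	indlen1, indlen2, stolen;

	/* indlen==1 extent split in two: each half still wants one block */
	indlen1 = indlen2 = 1;
	stolen = split_indlen(1, &indlen1, &indlen2, 0);
	printf("avail=0: indlen1=%llu indlen2=%llu stolen=%llu\n",
	       indlen1, indlen2, stolen);

	/* one block stolen from the deleted range covers both halves */
	indlen1 = indlen2 = 1;
	stolen = split_indlen(1, &indlen1, &indlen2, 1);
	printf("avail=1: indlen1=%llu indlen2=%llu stolen=%llu\n",
	       indlen1, indlen2, stolen);
	return 0;
}

Built with something like "cc -Wall -o split_indlen split_indlen.c", the first
call reports indlen1=0 stolen=0 (the zero-reservation case that trips the
asserts), while the second reports indlen1=1 indlen2=1 stolen=1, i.e. stealing
a single block from the deleted extent covers both new extents.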