xfs_db fails to properly detect the device sector size and thus segfaults
when run against an image file with a 4k sector size.  While that's
something we should fix in xfs_db, it will require a fair amount of
refactoring of the libxfs init code.

For now, just change shared/298 to run xfs_db against the loop device
created on the image file that is used for I/O, which feels like the right
thing to do anyway to avoid cache coherency issues.

Signed-off-by: Christoph Hellwig <hch@xxxxxx>
---
 tests/shared/298 | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tests/shared/298 b/tests/shared/298
index 071c03dee..f657578c7 100755
--- a/tests/shared/298
+++ b/tests/shared/298
@@ -69,7 +69,7 @@ get_free_sectors()
 	agsize=`$XFS_INFO_PROG $loop_mnt | $SED_PROG -n 's/.*agsize=\(.*\) blks.*/\1/p'`
 	# Convert free space (agno, block, length) to (start sector, end sector)
 	_umount $loop_mnt
-	$XFS_DB_PROG -r -c "freesp -d" $img_file | $SED_PROG '/^.*from/,$d'| \
+	$XFS_DB_PROG -r -c "freesp -d" $loop_dev | $SED_PROG '/^.*from/,$d'| \
 	$AWK_PROG -v spb=$sectors_per_block -v agsize=$agsize \
 	    '{ print spb * ($1 * agsize + $2), spb * ($1 * agsize + $2 + $3) - 1 }'
 	;;
--
2.39.2
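
[Not part of the patch; illustration only.] The awk step in the hunk above converts `freesp -d` output lines of the form (agno, AG block offset, length in blocks) into absolute (start sector, end sector) ranges. A standalone sketch with made-up input and assumed values (spb=8 sectors per block, agsize=16384 blocks) shows the arithmetic:

```shell
# Hypothetical input mimicking `xfs_db -r -c "freesp -d"` output after the
# sed filter: columns are agno, block offset within the AG, and length.
printf '0 100 50\n1 200 10\n' | \
awk -v spb=8 -v agsize=16384 \
    '{ print spb * ($1 * agsize + $2), spb * ($1 * agsize + $2 + $3) - 1 }'
# First line:  start = 8 * (0 * 16384 + 100) = 800,    end = 8 * 150   - 1 = 1199
# Second line: start = 8 * (1 * 16384 + 200) = 132672, end = 8 * 16594 - 1 = 132751
```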