On Wed, 5 Jun 2024 12:26:20 +0100, fdmanana@xxxxxxxxxx wrote:

> From: Filipe Manana <fdmanana@xxxxxxxx>
>
> The test writes a 128M file and expects to end up with 1024 extents, each
> with a size of 128K, which is the maximum size for compressed extents.
> Generally this is what happens, but it's possible for writeback to kick
> in while creating the file (due to memory pressure, something calling
> sync in parallel, etc.), which may result in more, smaller extents being
> created. This makes the test fail, since its golden output expects
> exactly 1024 extents with a size of 128K each.
>
> So to work around this, run defrag after creating the file, which will
> ensure we get only 128K extents in the file.
>
> Signed-off-by: Filipe Manana <fdmanana@xxxxxxxx>

Looks fine.

Signed-off-by: David Disseldorp <ddiss@xxxxxxx>

> ---
>  tests/btrfs/280 | 10 +++++++++-
>  1 file changed, 9 insertions(+), 1 deletion(-)
>
> diff --git a/tests/btrfs/280 b/tests/btrfs/280
> index d4f613ce..0f7f8a37 100755
> --- a/tests/btrfs/280
> +++ b/tests/btrfs/280
> @@ -13,7 +13,7 @@
>  # the backref walking code, used by fiemap to determine if an extent is shared.
>  #
>  . ./common/preamble
> -_begin_fstest auto quick compress snapshot fiemap
> +_begin_fstest auto quick compress snapshot fiemap defrag
>
>  . ./common/filter
>  . ./common/punch # for _filter_fiemap_flags

_require_defrag might be worth calling, but it doesn't really do anything
for btrfs, so I'm fine either way.
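
The expected extent count in the golden output follows directly from the
sizes quoted in the commit message: a 128M file made up of maximally-sized
128K compressed extents splits into 128M / 128K = 1024 extents. A quick
sanity check of that arithmetic (illustrative only, not part of the patch):

```python
# Sizes taken from the commit message above.
FILE_SIZE = 128 * 1024 * 1024   # 128M test file
EXTENT_SIZE = 128 * 1024        # 128K, the max size of a compressed extent

# With no writeback-induced splitting (or after defrag), the file is
# exactly this many extents, matching the test's golden output.
print(FILE_SIZE // EXTENT_SIZE)  # → 1024
```

Any extent split by early writeback makes the count exceed 1024, which is
why the defrag pass restores the deterministic layout the test expects.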