Re: Delivery Status Notification (Failure)




> > Can you try adding _check_scratch_fs after each test case?  Yes, it
>
> _check_scratch_fs now runs xfs_scrub on XFS as well as xfs_repair,
> so it's actually quite expensive.
>
> The whole point of aggregating all these tests into one fstest is to
> avoid the overhead of running _check_scratch_fs after every single
> test that is /extremely unlikely/ to fail on existing filesystems.


Filipe and Eryu suggested that we run _check_scratch_fs after each
subtest. Quoting Filipe:


>
> For this type of test, I think it's a good idea to let fsck run.
>
> Even if all of the links are persisted, the log/journal replay might
> have caused metadata inconsistencies in the fs, for example - this was
> true for many cases I fixed over the years in btrfs.
> Even if fsck doesn't report any problem now, it's still good to run
> it, to help prevent future regressions.
>
> Plus, this test creates a very small fs, so it's not as if fsck will
> take a significant time to run.
> So for all these reasons I would unmount and fsck after each test.
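
To make Filipe's point concrete: a subtest can find every link in
place after log replay and still leave the filesystem damaged, and
only the fsck catches the latter. A hypothetical subtest shape
(illustrative only, not taken from my patch; it assumes the standard
dmflakey helpers from fstests' common/dmflakey):

    # Illustrative subtest: the link may well survive the simulated
    # crash, yet journal replay itself could leave the metadata
    # inconsistent; only the fsck below catches that.
    touch $SCRATCH_MNT/foo
    ln $SCRATCH_MNT/foo $SCRATCH_MNT/bar
    $XFS_IO_PROG -c "fsync" $SCRATCH_MNT/foo

    # Simulate a crash, then remount to trigger log/journal replay.
    _flakey_drop_and_remount

    # Check 1: persistence, i.e. did the fsync'd link survive?
    [ -e $SCRATCH_MNT/bar ] || _fatal "link lost after replay"

    # Check 2: consistency, i.e. did replay leave the metadata sane?
    _unmount_flakey
    _check_scratch_fs $FLAKEY_DEV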


For these reasons, my patch currently runs _check_scratch_fs after
each subtest, in its _check_consistency helper:
+       _unmount_flakey
+       _check_scratch_fs $FLAKEY_DEV
+       [ $? -ne 0 ] && _fatal "fsck failed"
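
For context, that hook sits at the tail of each subtest's check; a
minimal sketch of how the surrounding flow might fit together (the
helper name matches my patch, but this body is an abbreviated
reconstruction, not the actual code):

    _check_consistency()
    {
        # Drop writes not yet on stable storage, then remount so that
        # log/journal replay runs, simulating post-crash recovery.
        _flakey_drop_and_remount

        # ... verify that the expected files/links survived ...

        # Unmount and fsck the underlying device so that metadata
        # inconsistencies introduced by replay itself are caught.
        _unmount_flakey
        _check_scratch_fs $FLAKEY_DEV
        [ $? -ne 0 ] && _fatal "fsck failed"

        # Remount the flakey device for the next subtest.
        _mount_flakey
    }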

On a 200MB scratch partition, adding this check introduced only about
3-4 seconds of total delay across the 37 subtests in this patch, i.e.
roughly 0.1 seconds per check. The patch currently takes about 12-15
seconds to run to completion on that partition.
Am I missing something, or is this the check you are talking about?

Thanks,
Jayashree Mohan


