Re: [PATCH 19/20] xfs: run xfs_repair at the end of each test

On Thu, Jul 07, 2016 at 09:13:40AM +1000, Dave Chinner wrote:
> On Mon, Jul 04, 2016 at 09:11:34PM -0700, Darrick J. Wong wrote:
> > On Tue, Jul 05, 2016 at 11:56:17AM +0800, Eryu Guan wrote:
> > > On Thu, Jun 16, 2016 at 06:48:01PM -0700, Darrick J. Wong wrote:
> > > > Run xfs_repair twice at the end of each test -- once to rebuild
> > > > the btree indices, and again with -n to check the rebuild work.
> > > > 
> > > > Signed-off-by: Darrick J. Wong <darrick.wong@xxxxxxxxxx>
> > > > ---
> > > >  common/rc |    3 +++
> > > >  1 file changed, 3 insertions(+)
> > > > 
> > > > 
> > > > diff --git a/common/rc b/common/rc
> > > > index 1225047..847191e 100644
> > > > --- a/common/rc
> > > > +++ b/common/rc
> > > > @@ -2225,6 +2225,9 @@ _check_xfs_filesystem()
> > > >          ok=0
> > > >      fi
> > > >  
> > > > +    $XFS_REPAIR_PROG $extra_options $extra_log_options $extra_rt_options $device >$tmp.repair 2>&1
> > > > +    cat $tmp.repair | _fix_malloc		>>$seqres.full
> > > > +
> > > 
> > > Won't this hide fs corruptions? Did I miss anything?
> > 
> > I could've sworn it did:
> > 
> > xfs_repair -n
> > (complain if corrupt)
> > 
> > xfs_repair
> > 
> > xfs_repair -n
> > (complain if still corrupt)
> > 
> > But that first xfs_repair -n hunk disappeared. :(
> > 
> > Ok, will fix and resend.
> 
> Not sure this is the best idea - when repair on an aged test device
> takes 10s, this means the test harness overhead increases by a
> factor of 3. i.e. test takes 1s to run, checking the filesystem
> between tests now takes 30s. i.e. this will badly blow out the run
> time of the test suite on aged test devices....
> 
> What does this overhead actually gain us that we couldn't encode
> explicitly into a single test or two? e.g. the test itself runs
> repair on the aged test device....

I'm primarily using it as a way to expose the new rmap/refcount/rtrmap btree
rebuilding code to a wider variety of filesystems.  But you're right, there's
no need to expose /everyone/ to this behavior.  Shall I rework the change
so that one can turn it on or off as desired?
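Something like the following untested sketch, perhaps -- the knob name
TEST_XFS_REPAIR_REBUILD and the helper names are made up here for
illustration, not part of the patch:

```shell
# Untested sketch; TEST_XFS_REPAIR_REBUILD, _xfs_repair_rebuild_enabled,
# and check_and_rebuild are invented names, not from the posted patch.

_xfs_repair_rebuild_enabled() {
	# opt-in: only pay for the extra repair passes when asked
	[ "$TEST_XFS_REPAIR_REBUILD" = "yes" ]
}

check_and_rebuild() {
	local device="$1"

	# pass 1: complain if the fs is already corrupt
	$XFS_REPAIR_PROG -n $device >> $seqres.full 2>&1 || return 1

	if _xfs_repair_rebuild_enabled; then
		# pass 2: rebuild the btree indices
		$XFS_REPAIR_PROG $device >> $seqres.full 2>&1

		# pass 3: complain if the rebuild left the fs corrupt
		$XFS_REPAIR_PROG -n $device >> $seqres.full 2>&1 || return 1
	fi
	return 0
}
```

That way the default _check_xfs_filesystem cost stays at one repair -n
pass, and only testers who set the variable eat the 3x overhead on aged
devices.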

--D

> 
> Cheers,
> 
> Dave.
> -- 
> Dave Chinner
> david@xxxxxxxxxxxxx
> 
> _______________________________________________
> xfs mailing list
> xfs@xxxxxxxxxxx
> http://oss.sgi.com/mailman/listinfo/xfs
