Re: [PATCH v1] xfs: fix sb write verify for lazysbcount

On Sun, Oct 23, 2022 at 09:07:42PM -0700, Darrick J. Wong wrote:
> On Sun, Oct 23, 2022 at 08:16:13AM +1100, Dave Chinner wrote:
> > On Sat, Oct 22, 2022 at 08:01:25PM +0800, Long Li wrote:
> > > On Fri, Oct 21, 2022 at 07:14:28PM -0700, Darrick J. Wong wrote:
> > > > On Sat, Oct 22, 2022 at 10:03:45AM +0800, Long Li wrote:
> > > > > When lazysbcount is enabled, a multi-threaded stress test of XFS
> > > > > reports the following problems:
> > 
> > We've had lazy sb counters for 15 years and just about every XFS
> > filesystem in production uses them, so providing us with some idea
> > of the scope of the problem and how to reproduce it would be greatly
> > appreciated.
> > 
> > What stress test are you running? What filesystem config does it
> > manifest on (other than lazysbcount=1)?  How long does the stress
> > test run for, and where/why does log recovery get run in this stress
> > test?
> > 
> > > > > XFS (loop0): SB summary counter sanity check failed
> > > > > XFS (loop0): Metadata corruption detected at xfs_sb_write_verify
> > > > > 	     +0x13b/0x460, xfs_sb block 0x0
> > > > > XFS (loop0): Unmount and run xfs_repair
> > > > > XFS (loop0): First 128 bytes of corrupted metadata buffer:
> > > > > 00000000: 58 46 53 42 00 00 10 00 00 00 00 00 00 28 00 00  XFSB.........(..
> > > > > 00000010: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
> > > > > 00000020: 69 fb 7c cd 5f dc 44 af 85 74 e0 cc d4 e3 34 5a  i.|._.D..t....4Z
> > > > > 00000030: 00 00 00 00 00 20 00 06 00 00 00 00 00 00 00 80  ..... ..........
> > > > > 00000040: 00 00 00 00 00 00 00 81 00 00 00 00 00 00 00 82  ................
> > > > > 00000050: 00 00 00 01 00 0a 00 00 00 00 00 04 00 00 00 00  ................
> > > > > 00000060: 00 00 0a 00 b4 b5 02 00 02 00 00 08 00 00 00 00  ................
> > > > > 00000070: 00 00 00 00 00 00 00 00 0c 09 09 03 14 00 00 19  ................
> > > > > XFS (loop0): Corruption of in-memory data (0x8) detected at _xfs_buf_ioapply
> > > > > 	+0xe1e/0x10e0 (fs/xfs/xfs_buf.c:1580).  Shutting down filesystem.
> > > > > XFS (loop0): Please unmount the filesystem and rectify the problem(s)
> > > > > XFS (loop0): log mount/recovery failed: error -117
> > > > > XFS (loop0): log mount failed
> > > > > 
> > > > > The cause of the problem is that during the log recovery process, incorrect
> > > > > icount and ifree are recovered from the log and fail to pass the size check.
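
For reference, the "size check" here is the summary counter sanity check in
the superblock write verifier (xfs_validate_sb_write() in
fs/xfs/libxfs/xfs_sb.c), which emits the "SB summary counter sanity check
failed" warning seen in the log above. Roughly, the relevant test looks like
this (the exact body varies by kernel version):

	/* Sketch of the write-verifier check that rejects the recovered sb. */
	if (sbp->sb_fdblocks > sbp->sb_dblocks ||
	    !xfs_verify_icount(mp, sbp->sb_icount) ||
	    sbp->sb_ifree > sbp->sb_icount) {
		xfs_warn(mp, "SB summary counter sanity check failed");
		return -EFSCORRUPTED;
	}

So any logged superblock with sb_ifree > sb_icount will trip this check when
the recovered superblock is written back.
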
> > > > 
> > > > Are you saying that the log contained a transaction in which ifree >
> > > > icount?
> > > 
> > > Yes, this situation is possible. For example consider the following sequence:
> > > 
> > >  CPU0				    CPU1
> > >  xfs_log_sb			    xfs_trans_unreserve_and_mod_sb
> > >  ----------			    ------------------------------
> > >  percpu_counter_sum(&mp->m_icount)
> > > 				    percpu_counter_add_batch(&mp->m_icount,
> > > 						  idelta, XFS_ICOUNT_BATCH)
> > > 				    percpu_counter_add(&mp->m_ifree, ifreedelta)
> > >  percpu_counter_sum(&mp->m_ifree)
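
For context, xfs_log_sb() samples the per-CPU counters back into the in-core
superblock one at a time before logging it, roughly like this (illustrative
excerpt, not the exact upstream body):

	/*
	 * The sums are taken independently, with nothing serialising them
	 * against concurrent xfs_trans_unreserve_and_mod_sb() updates, so
	 * m_icount can be sampled before a racing inode chunk allocation
	 * and m_ifree after it, leaving the sampled ifree larger than the
	 * sampled icount.
	 */
	mp->m_sb.sb_icount = percpu_counter_sum(&mp->m_icount);
	mp->m_sb.sb_ifree = percpu_counter_sum(&mp->m_ifree);
	mp->m_sb.sb_fdblocks = percpu_counter_sum(&mp->m_fdblocks);
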
> > 
> > What caused the xfs_log_sb() to be called? Very few things
> > actually log the superblock this way at runtime - it's generally
> > only logged directly like this when a feature bit changes during a
> > transaction (rare) or at a synchronisation point when everything
> > else is idle and there's no chance of a race like this occurring...
> > 
> > I can see a couple of routes to this occurring via feature bit
> > modification, but I don't see them being easy to hit or something
> > that would exist for very long in the journal. Hence I'm wondering
> > if there should be runtime protection for xfs_log_sb() to avoid
> > these problems....
> 
> Maybe.  Or perhaps we sample m_i{count,free} until they come up with a
> value that will pass the verifier, and only then log the new values to
> the primary super xfs_buf?
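
A sketch of what that re-sampling approach might look like (hypothetical,
just to illustrate the idea, not existing code):

	/*
	 * Hypothetical: keep re-reading the counters until the pair
	 * satisfies the verifier's invariant (ifree <= icount) before
	 * copying them into the in-core superblock for logging.
	 */
	uint64_t	icount, ifree;

	do {
		icount = percpu_counter_sum(&mp->m_icount);
		ifree = percpu_counter_sum(&mp->m_ifree);
	} while (ifree > icount);

	mp->m_sb.sb_icount = icount;
	mp->m_sb.sb_ifree = ifree;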

I suspect the simplest thing to do is this:

	mp->m_sb.sb_ifree = min_t(uint64_t, percpu_counter_sum(&mp->m_ifree),
				mp->m_sb.sb_icount);

That way ifree will never be logged as being greater than icount.
Neither icount nor ifree will be accurate if we are racing with other
updates, but it will guarantee that what we write to the journal
won't trigger corruption warnings.
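
In context that would amount to something like the following in xfs_log_sb()
(a sketch of the idea, not a tested patch):

	/*
	 * Sample icount first, then clamp the sampled ifree so the pair we
	 * log always satisfies sb_ifree <= sb_icount, even if both values
	 * are slightly stale due to racing counter updates.
	 */
	mp->m_sb.sb_icount = percpu_counter_sum(&mp->m_icount);
	mp->m_sb.sb_ifree = min_t(uint64_t,
				  percpu_counter_sum(&mp->m_ifree),
				  mp->m_sb.sb_icount);
	mp->m_sb.sb_fdblocks = percpu_counter_sum(&mp->m_fdblocks);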

-Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx


