Re: [PATCH 3/3] xfs: prevent mount and log shutdown race

On Tue, Nov 12, 2024 at 03:58:08PM -0800, Darrick J. Wong wrote:
> On Wed, Nov 13, 2024 at 09:05:16AM +1100, Dave Chinner wrote:
> > From: Dave Chinner <dchinner@xxxxxxxxxx>
> > 
> > I recently had an fstests hang where there were two internal tasks
> > stuck like so:
....
> > For the CIL to be doing shutdown processing, the log must be marked
> > with XLOG_IO_ERROR, but that doesn't happen until after the log
> > force is issued. Hence for xfs_do_force_shutdown() to be forcing
> > the log on a shut down log, we must have had a racing
> > xlog_force_shutdown and xfs_force_shutdown like so:
> > 
> > p0			p1			CIL push
> > 
> >    			<holds buffer lock>
> > xlog_force_shutdown
> >   xfs_log_force
> >    test_and_set_bit(XLOG_IO_ERROR)
> >    						xlog_state_release_iclog()
> > 						  sees XLOG_IO_ERROR
> > 						  xlog_state_shutdown_callbacks
> > 						    ....
> > 						    xfs_buf_item_unpin
> > 						    xfs_buf_lock
> > 						    <blocks on buffer p1 holds>
> > 
> >    			xfs_force_shutdown
> > 			  xfs_set_shutdown(mp) wins
> > 			    xlog_force_shutdown
> > 			      xfs_log_force
> > 			        <blocks on CIL push>
> > 
> >   xfs_set_shutdown(mp) fails
> >   <shuts down rest of log>
> 
> Huh.  I wonder, what happens today if there are multiple threads all
> trying to shut down the log?  Same thing?

Yes. Anywhere that both a log shutdown and a mount shutdown can be
called concurrently, with one of them holding a locked buffer that
is also dirty in the CIL, can trip over this. When I first saw it I
thought "calling shutdown with a locked buffer is bad", then
realised that we do that -everywhere- and assume it is safe to do
so. That's when I started looking deeper and found this....

> I guess we've never really
> gone Farmer Brown's Bad Day[1] on this part of the fs.

Oh, running ~64 individual fstests concurrently on the same VM does
a good imitation of that.

$ time sudo ./check-parallel /mnt/xfs -s xfs -x dump
Tests run: 1143
Failure count: 11

real    9m12.307s
user    0m0.007s
sys     0m0.013s
$

That's what's finding these little weird timing-related issues. I've got
several other repeating issues that I haven't got to the bottom of
yet, so Farmer Brown's Bad Day is not over yet...

-Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx
