Re: Flaky test: generic/085


On Wed, Jun 12, 2024 at 03:47:16PM GMT, Theodore Ts'o wrote:
> On Wed, Jun 12, 2024 at 01:25:07PM +0200, Christian Brauner wrote:
> > I've been trying to reproduce this with pmem yesterday and wasn't able to.
> > 
> > What's the kernel config and test config that's used?
> >
> 
> The kernel config can be found here:
> 
> https://github.com/tytso/xfstests-bld/blob/master/kernel-build/kernel-configs/config-6.1
> 
> Drop it into .config in the build directory of any kernel sources
> newer than 6.1, and then run "make olddefconfig".  This is all
> automated in the install-kconfig script which I use:
> 
> https://github.com/tytso/xfstests-bld/blob/master/kernel-build/install-kconfig
> 
> The VM has 4 CPUs and 26GiB of memory, and the kernel is booted with the
> boot command line options "memmap=4G!9G memmap=9G!14G", which sets up
> fake /dev/pmem0 and /dev/pmem1 devices backed by RAM.  This is my poor
> engineer's way of testing DAX without needing to get access to
> expensive VMs with pmem.  :-)
> 
> I'm assuming this is a timing-dependent bug which is easiest to
> trigger on fast devices, so a ramdisk might also work.  FWIW, I also
> can see failures relatively frequently using the ext4/nojournal
> configuration on an SSD-backed cloud block device (GCE's Persistent
> Disk SSD product).
> 
> As a result, if you grab my xfstests-bld repo from github and then
> run "qemu-xfstests -c ext4/nojournal -C 20 generic/085", it should
> also reproduce.  See Documentation/kvm-quickstart.md for more details.

Thanks, Ted! Ok, I think I figured it out.

P1
dm_suspend()
-> bdev_freeze()
   mutex_lock(&bdev->bd_fsfreeze_mutex);
   atomic_inc_return(&bdev->bd_fsfreeze_count); // == 1
   mutex_unlock(&bdev->bd_fsfreeze_mutex);

P2						P3
setup_bdev_super()
bdev_file_open_by_dev();
atomic_read(&bdev->bd_fsfreeze_count); // != 0

						bdev_thaw()
						mutex_lock(&bdev->bd_fsfreeze_mutex);
						atomic_dec_return(&bdev->bd_fsfreeze_count); // == 0
						mutex_unlock(&bdev->bd_fsfreeze_mutex);
						bd_holder_lock();
						// grab passive reference on sb via sb->s_count
						bd_holder_unlock();
						// Sleep to be woken when superblock ready or dead
bdev_fput()
bd_holder_lock()
// yield bdev
bd_holder_unlock()

deactivate_locked_super()
// notify that superblock is dead

						// get woken and see that superblock is dead; fail

In words, this means that P1 calls dm_suspend(), which calls into
bdev_freeze() before the block device has been claimed by the
filesystem. This brings bdev->bd_fsfreeze_count to 1, and no call into
fs_bdev_freeze() is required since no filesystem has claimed the
device yet.
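
To make the freeze side concrete, here is a simplified sketch of
bdev_freeze() (abbreviated, not meant to match mainline line by line):
when nothing has claimed the bdev there is no holder, so only the
counter is raised and fs_bdev_freeze() is never invoked.

/* Simplified sketch of bdev_freeze(); details abbreviated. */
int bdev_freeze(struct block_device *bdev)
{
        int error = 0;

        mutex_lock(&bdev->bd_fsfreeze_mutex);

        if (atomic_inc_return(&bdev->bd_fsfreeze_count) > 1) {
                /* Already frozen, nothing more to do. */
                mutex_unlock(&bdev->bd_fsfreeze_mutex);
                return 0;
        }

        mutex_lock(&bdev->bd_holder_lock);
        if (bdev->bd_holder_ops && bdev->bd_holder_ops->freeze) {
                /* A filesystem holds the device: freeze via the holder. */
                error = bdev->bd_holder_ops->freeze(bdev);
                /* The callback drops bd_holder_lock for us. */
        } else {
                /*
                 * No holder yet (P1's case in the race above): just sync
                 * the device. bd_fsfreeze_count stays at 1 and
                 * fs_bdev_freeze() is never called.
                 */
                mutex_unlock(&bdev->bd_holder_lock);
                error = sync_blockdev(bdev);
        }

        if (error)
                atomic_dec(&bdev->bd_fsfreeze_count);
        mutex_unlock(&bdev->bd_fsfreeze_mutex);
        return error;
}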

Now P2 tries to mount that frozen block device. It claims it and
checks bdev->bd_fsfreeze_count. As the count is elevated, it aborts
the mount, holding sb->s_umount the whole time, of course.
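
The check P2 trips over lives in setup_bdev_super(). Roughly, as a
simplified sketch with the error reporting and the rest of the setup
elided:

/* Simplified sketch of the start of setup_bdev_super(). */
static int setup_bdev_super(struct super_block *sb, int sb_flags,
                            struct fs_context *fc)
{
        struct file *bdev_file;
        struct block_device *bdev;

        bdev_file = bdev_file_open_by_dev(sb->s_dev, sb_open_mode(sb_flags),
                                          sb, &fs_holder_ops);
        if (IS_ERR(bdev_file))
                return PTR_ERR(bdev_file);
        bdev = file_bdev(bdev_file);

        /*
         * The device is frozen (P1 raised bd_fsfreeze_count via
         * dm_suspend()), so the mount is aborted. Note that
         * sb->s_umount is held across all of this.
         */
        if (atomic_read(&bdev->bd_fsfreeze_count) > 0) {
                bdev_fput(bdev_file);
                return -EBUSY;
        }

        /* ... rest of the superblock setup elided ... */
        sb->s_bdev_file = bdev_file;
        sb->s_bdev = bdev;
        return 0;
}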

In the meantime P3 calls dm_resume(). It sees that the block device
is already claimed by a filesystem and calls into fs_bdev_thaw().

It takes a passive reference and realizes that the filesystem isn't
ready yet. So P3 puts itself to sleep to wait for the filesystem to
become ready.
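
The sleeping is done by the superblock locking helpers in fs/super.c.
Here is a heavily simplified sketch of the get_bdev_super() logic that
fs_bdev_thaw() relies on; the wait_var_event() below just stands in
for the real super_lock() machinery:

/* Heavily simplified sketch of get_bdev_super(). */
static struct super_block *get_bdev_super(struct block_device *bdev)
{
        struct super_block *sb = bdev->bd_holder;

        /* Called under bd_holder_lock, so the holder can't go away. */
        spin_lock(&sb_lock);
        sb->s_count++;                  /* passive reference only */
        spin_unlock(&sb_lock);
        mutex_unlock(&bdev->bd_holder_lock);

        /*
         * Sleep until the superblock is either fully set up (SB_BORN)
         * or marked dead (SB_DYING). In the race above P2 never gets
         * the superblock born; we are woken by P2's
         * deactivate_locked_super() and find SB_DYING set.
         */
        wait_var_event(&sb->s_flags, sb->s_flags & (SB_BORN | SB_DYING));

        if ((sb->s_flags & SB_DYING) ||
            !atomic_inc_not_zero(&sb->s_active)) {
                /* Superblock died before it ever became ready: fail. */
                put_super(sb);
                return NULL;
        }

        /* Alive: hand back an active reference, drop the passive one. */
        put_super(sb);
        return sb;
}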

P2 puts the last active reference to the filesystem and marks it as
dying.

Now P3 gets woken, sees that the filesystem is dying and
get_bdev_super() fails.
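
That failure then propagates out of fs_bdev_thaw() and back through
bdev_thaw() into dm_resume(). Roughly, again as a simplified sketch
(the mainline version may differ in the exact flags and error codes):

/* Simplified sketch of fs_bdev_thaw(). */
static int fs_bdev_thaw(struct block_device *bdev)
{
        struct super_block *sb;
        int error;

        lockdep_assert_held(&bdev->bd_holder_lock);

        sb = get_bdev_super(bdev);
        if (!sb)
                return -EINVAL; /* superblock is dead: thaw fails */

        error = thaw_super(sb, FREEZE_HOLDER_USERSPACE);
        deactivate_super(sb);
        return error;
}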

So Darrick is correct about the fix but the reasoning is a bit
different. :)

Patch appended and on #vfs.fixes.
