Re: [PATCH 1/8] generic/604: try to make race occur reliably

On Mon, Feb 26, 2024 at 06:00:47PM -0800, Darrick J. Wong wrote:
> From: Darrick J. Wong <djwong@xxxxxxxxxx>
> 
> This test will occasionally fail like so:
> 
> --- /tmp/fstests/tests/generic/604.out	2024-02-03 12:08:52.349924277 -0800
> +++ /var/tmp/fstests/generic/604.out.bad	2024-02-05 04:35:55.020000000 -0800
> @@ -1,2 +1,5 @@
>  QA output created by 604
> -Silence is golden
> +mount: /opt: /dev/sda4 already mounted on /opt.
> +       dmesg(1) may have more information after failed mount system call.
> +mount -o usrquota,grpquota,prjquota, /dev/sda4 /opt failed
> +(see /var/tmp/fstests/generic/604.full for details)
> 
> As far as I can tell, the cause of this seems to be _scratch_mount
> getting forked and exec'd before the backgrounded umount process has a
> chance to enter the kernel.  When this occurs, the mount() system call
> will return -EBUSY because this isn't an attempt to make a bind mount.
> Slow things down slightly by stalling the mount by 10ms.
> 
> Signed-off-by: Darrick J. Wong <djwong@xxxxxxxxxx>
> ---
>  tests/generic/604 |    7 ++++---
>  1 file changed, 4 insertions(+), 3 deletions(-)
> 
> 
> diff --git a/tests/generic/604 b/tests/generic/604
> index cc6a4b214f..a0dcdcd58e 100755
> --- a/tests/generic/604
> +++ b/tests/generic/604
> @@ -24,10 +24,11 @@ _scratch_mount
>  for i in $(seq 0 500); do
>  	$XFS_IO_PROG -f -c "pwrite 0 4K" $SCRATCH_MNT/$i >/dev/null
>  done
> -# For overlayfs, avoid unmouting the base fs after _scratch_mount
> -# tries to mount the base fs
> +# For overlayfs, avoid unmouting the base fs after _scratch_mount tries to
> +# mount the base fs.  Delay the mount attempt by 0.1s in the hope that the
> +# mount() call will try to lock s_umount /after/ umount has already taken it.
>  $UMOUNT_PROG $SCRATCH_MNT &
> -_scratch_mount
> +sleep 0.01s ; _scratch_mount

0.1s or 0.01s? The comment above says 0.1s, but the code actually sleeps 0.01s :)

The comment in g/604 says "Evicting dirty inodes can take a long time during
umount." So what delay makes sense here, and how long is the race window this
bug leaves open?
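For what it's worth, the timing dependence could be removed entirely by
synchronizing on the backgrounded job rather than sleeping for a fixed
interval. A minimal sketch of the pattern (illustration only -- it uses
hypothetical stand-in commands instead of the real mount/umount, since the
actual race is over the kernel's s_umount lock and can't be observed from
a plain script):

```shell
#!/bin/bash
# Illustrative sketch only, not the fstests test itself.  It models the
# ordering problem the patch describes: a backgrounded job (standing in
# for "$UMOUNT_PROG $SCRATCH_MNT &") may not have run by the time the
# foreground command (standing in for _scratch_mount) executes, unless
# the foreground side waits for evidence that it has started.

log=$(mktemp)

# Stand-in for the backgrounded umount: records when it has actually
# started doing work.
( echo "umount-start" >> "$log" ) &
bgpid=$!

# Instead of a fixed "sleep 0.01s", poll until the background job is
# observably underway.  This removes the timing dependence, at the cost
# of a short busy-wait loop.
while ! grep -q umount-start "$log" 2>/dev/null; do
	sleep 0.01
done

echo "mount-start" >> "$log"	# stand-in for _scratch_mount

wait "$bgpid"
result=$(cat "$log")
printf '%s\n' "$result"
rm -f "$log"
```

With the poll in place the two lines always appear in "umount-start",
"mount-start" order, no matter how the scheduler interleaves the fork and
exec -- whereas a bare sleep only makes the wrong ordering less likely.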

Thanks,
Zorro

>  wait
>  
>  echo "Silence is golden"
> 
