[PATCH v4] tests/generic: check log recovery with readonly mount




Check log recovery with a readonly mount, followed by a rw mount. After
log recovery by these two mounts, the filesystem should be in a
consistent state.
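For reference, the sequence the test exercises can be sketched as a standalone script. This is a hedged illustration, not part of the patch: DEV, MNT and the closing xfs_repair check are assumptions, and the destructive on-device steps are skipped unless both variables are set.

```shell
#!/bin/bash
# Hypothetical reproduction sketch for kernel commit 50d25484bebe
# ("xfs: sync lazy sb accounting on quiesce of read-only mounts").
# DEV and MNT are placeholders for a scratch XFS device and mount
# point; they are NOT part of this patch. The on-device steps are
# skipped unless both are set, since they are destructive and need root.
if [ -z "${DEV:-}" ] || [ -z "${MNT:-}" ]; then
	echo "DEV and MNT not set, skipping the on-device steps"
	skipped=yes
else
	mkfs.xfs -f "$DEV"
	mount "$DEV" "$MNT"
	echo Testing > "$MNT/testfile"

	# Force a shutdown; -f is required to reproduce the bug
	xfs_io -x -c "shutdown -f" "$MNT"
	umount "$MNT"

	mount -o ro "$DEV" "$MNT"	# first recovery pass, readonly
	umount "$MNT"
	mount "$DEV" "$MNT"		# followed by a read-write mount
	umount "$MNT"

	# On a buggy kernel the sb counters are stale and a no-modify
	# repair check complains about the inconsistency
	xfs_repair -n "$DEV"
	skipped=no
fi
```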

Suggested-by: Donald Douwsma <ddouwsma@xxxxxxxxxx>
Reviewed-by: Darrick J. Wong <djwong@xxxxxxxxxx>
Signed-off-by: Murphy Zhou <xzhou@xxxxxxxxxx>
---

Thanks Eryu and Zorro!

v2:
   Add explanation of the issue
   Add xfs_force_bdev data $SCRATCH_MNT
   Use DF_PROG
   Renumbered this test
v3:
   Add _require_scratch_shutdown
   Use _get_available_space
   Explain why _scratch_mount is not used
v4:
   Add to recoveryloop group
   Move to generic as there are no xfs specific operations
   Remove all operations after the two cycle mounts; let the fsck run by
fstests check the filesystem consistency
   Use _scratch_shutdown, MOUNT_PROG
   Remove unnecessary comments

 tests/generic/999     | 45 +++++++++++++++++++++++++++++++++++++++++++
 tests/generic/999.out |  2 ++
 2 files changed, 47 insertions(+)
 create mode 100755 tests/generic/999
 create mode 100644 tests/generic/999.out

diff --git a/tests/generic/999 b/tests/generic/999
new file mode 100755
index 00000000..9685488b
--- /dev/null
+++ b/tests/generic/999
@@ -0,0 +1,45 @@
+#! /bin/bash
+# SPDX-License-Identifier: GPL-2.0
+# Copyright (c) 2021 Red Hat, Inc.  All Rights Reserved.
+#
+# FS QA Test 999
+#
+# Testcase for kernel commit:
+#   50d25484bebe xfs: sync lazy sb accounting on quiesce of read-only mounts
+#
+# After a shutdown and a readonly mount, a subsequent read-write mount would
+# get a wrong number of available blocks. This is because unmounting the log
+# on a readonly filesystem does not log the sb counters.
+#
+. ./common/preamble
+_begin_fstest auto quick recoveryloop shutdown
+
+# real QA test starts here
+
+_require_scratch
+_require_scratch_shutdown
+_scratch_mkfs > $seqres.full 2>&1
+
+# Don't use _scratch_mount because we need to mount without an SELinux
+# context to reproduce this issue. If we mount with _scratch_mount, an
+# SELinux context may be applied and the original issue is not reproduced.
+$MOUNT_PROG $SCRATCH_DEV $SCRATCH_MNT
+_xfs_force_bdev data $SCRATCH_MNT
+
+echo Testing > $SCRATCH_MNT/testfile
+
+# -f is required to reproduce the issue
+_scratch_shutdown -f
+
+_scratch_cycle_mount ro
+_scratch_cycle_mount
+
+# These two mounts should have fully recovered the log. Exit here and let
+# the fsck run by fstests check the consistency of the tested filesystem.
+# On a buggy kernel, this testcase reports an inconsistent filesystem; on
+# a fixed kernel, it passes.
+
+# success, all done
+echo "Silence is golden"
+status=0
+exit
diff --git a/tests/generic/999.out b/tests/generic/999.out
new file mode 100644
index 00000000..3b276ca8
--- /dev/null
+++ b/tests/generic/999.out
@@ -0,0 +1,2 @@
+QA output created by 999
+Silence is golden
-- 
2.20.1



