<RCA>
Fix submitted at: https://review.gluster.org/#/c/glusterfs/+/20710/

Earlier this test did the following things on M0 and M1 mounted on the same volume:
1. create file M0/testfile
2. open an fd on M0/testfile
3. remove the file from M1, i.e., M1/testfile
4. echo "data" >> M0/testfile

The test expects the append to M0/testfile to fail. However, the redirector ">>" creates the file if it doesn't exist, so the only reason the test passed was that the lookup succeeded because of a stale stat in md-cache. This hypothesis was verified by two experiments:

* Add a sleep of 10 seconds before the append. The md-cache entry expires, the lookup fails, the file is created afresh and the append succeeds on the new file.
* Set the md-cache timeout to 600 seconds. The test never fails, even with a sleep of 10 before the append, because the stale stat in md-cache survives the sleep.

So the spurious nature of the failure depended on whether the lookup was done while the stat was still present in md-cache. The actual test should have been to write to the fd opened in step 2 above, and I've changed the test accordingly. Note that this patch also remounts M0 after the initial file creation, because open-behind stops opening-behind once it witnesses a setattr on the inode, and touch involves a setattr. After the remount, no create operation is done and hence the file is opened-behind.
</RCA>
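For illustration, a minimal shell sketch of the difference between the two append styles (this is not the actual .t test; the gluster test-framework assertions, volume setup and the open-behind remount are omitted, and fd 4 is an arbitrary choice):

    # Minimal sketch, assuming $M0 and $M1 are two mounts of the same volume.
    touch "$M0/testfile"        # step 1: create the file
    exec 4>>"$M0/testfile"      # step 2: keep an fd open on it from M0
    rm -f "$M1/testfile"        # step 3: remove the file via M1

    # Old check: ">>" re-opens the path with O_CREAT, so once the stale
    # md-cache stat expires the lookup fails, a new file gets created and
    # the append succeeds on that new file instead of failing:
    #   echo "data" >> "$M0/testfile"

    # Intended check: write through the fd opened in step 2, so no fresh
    # lookup (and hence no implicit create) is involved.
    echo "data" >&4
    exec 4>&-                   # close the fd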
On Mon, Aug 13, 2018 at 6:12 AM, Shyam Ranganathan <srangana@xxxxxxxxxx> wrote:
As a means of keeping the focus going and squashing the remaining tests
that were failing sporadically, I request each test/component owner to:
- respond to this mail, changing the subject to the test name
  (testname.t) that they are responding to (adding more than one in case
  they have the same RCA)
- include the current RCA and status of the same
The list of tests and current owners, as per the spreadsheet we were
tracking, is:
./tests/basic/distribute/rebal-all-nodes-migrate.t TBD
./tests/basic/tier/tier-heald.t TBD
./tests/basic/afr/sparse-file-self-heal.t TBD
./tests/bugs/shard/bug-1251824.t TBD
./tests/bugs/shard/configure-lru-limit.t TBD
./tests/bugs/replicate/bug-1408712.t Ravi
./tests/basic/afr/replace-brick-self-heal.t TBD
./tests/00-geo-rep/00-georep-verify-setup.t Kotresh
./tests/basic/afr/gfid-mismatch-resolution-with-fav-child-policy.t Karthik
./tests/basic/stats-dump.t TBD
./tests/bugs/bug-1110262.t TBD
./tests/basic/ec/ec-data-heal.t Mohit
./tests/bugs/replicate/bug-1448804-check-quorum-type-values.t Pranith
./tests/bugs/snapshot/bug-1482023-snpashot-issue-with-other-processes-accessing-mounted-path.t TBD
./tests/basic/ec/ec-5-2.t Sunil
./tests/bugs/shard/bug-shard-discard.t TBD
./tests/bugs/glusterd/remove-brick-testcases.t TBD
./tests/bugs/protocol/bug-808400-repl.t TBD
./tests/bugs/quick-read/bug-846240.t Du
./tests/bugs/replicate/bug-1290965-detect-bitrotten-objects.t Mohit
./tests/00-geo-rep/georep-basic-dr-tarssh.t Kotresh
./tests/bugs/ec/bug-1236065.t Pranith
./tests/00-geo-rep/georep-basic-dr-rsync.t Kotresh
./tests/basic/ec/ec-1468261.t Ashish
./tests/basic/afr/add-brick-self-heal.t Ravi
./tests/basic/afr/granular-esh/replace-brick.t Pranith
./tests/bugs/core/multiplex-limit-issue-151.t Sanju
./tests/bugs/glusterd/validating-server-quorum.t Atin
./tests/bugs/replicate/bug-1363721.t Ravi
./tests/bugs/index/bug-1559004-EMLINK-handling.t Pranith
./tests/bugs/replicate/bug-1433571-undo-pending-only-on-up-bricks.t Karthik
./tests/bugs/glusterd/add-brick-and-validate-replicated-volume-options.t Atin
./tests/bugs/glusterd/rebalance-operations-in-single-node.t TBD
./tests/bugs/replicate/bug-1386188-sbrain-fav-child.t TBD
./tests/bitrot/bug-1373520.t Kotresh
./tests/bugs/distribute/bug-1117851.t Shyam/Nigel
./tests/bugs/glusterd/quorum-validation.t Atin
./tests/bugs/distribute/bug-1042725.t Shyam
./tests/bugs/replicate/bug-1586020-mark-dirty-for-entry-txn-on-quorum-failure.t Karthik
./tests/bugs/quota/bug-1293601.t TBD
./tests/bugs/bug-1368312.t Du
./tests/bugs/distribute/bug-1122443.t Du
./tests/bugs/core/bug-1432542-mpx-restart-crash.t 1608568 Nithya/Shyam
Thanks,
Shyam