Re: ./tests/basic/uss.t is timing out in release-6 branch

To make things easier for the cleanup() function in the test framework, I think it would be better to ensure that uss.t itself deletes the snapshots and the volume once the tests are done. Patch [1] has been submitted for review; the rough idea is sketched below.

[1] https://review.gluster.org/#/c/glusterfs/+/22649/
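
For illustration, the tail of uss.t could do something like the following (a minimal sketch using the test framework's TEST macro and the usual $CLI/$V0 conventions; the actual commands in the patch may differ):

    # Hypothetical self-cleanup at the end of uss.t (illustrative only):
    TEST $CLI snapshot delete volume $V0   # remove all snapshots of the volume
    TEST $CLI volume stop $V0              # stop the volume
    TEST $CLI volume delete $V0            # delete it so cleanup() has less to undo
    cleanup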

Regards,
Raghavendra

On Tue, Apr 30, 2019 at 10:42 AM FNU Raghavendra Manjunath <rabhat@xxxxxxxxxx> wrote:

The failure looks similar to the issue I had mentioned in [1].

In short, for some reason the cleanup function (the one we call at the end of our .t files) seems to be taking longer than expected and is also not cleaning up properly. This causes problems for the 2nd iteration, where basic steps such as volume creation or volume start fail with ENODATA or ENOENT errors.
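
As an illustration, leftover state from the 1st iteration could be spotted with checks like the following before the retry (hypothetical diagnostics, not part of the test itself; paths follow the framework's /d/backends layout):

    # Hypothetical checks for state left behind by cleanup:
    mount | grep patchy        # stale brick or snapshot mounts from the previous run
    ls -l /d/backends/         # leftover export directories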

The 2nd iteration of the uss.t run had the following errors:

"[2019-04-29 09:08:15.275773]:++++++++++ G_LOG:./tests/basic/uss.t: TEST: 39 gluster --mode=script --wignore volume set patchy nfs.disable false ++++++++++
[2019-04-29 09:08:15.390550]  : volume set patchy nfs.disable false : SUCCESS
[2019-04-29 09:08:15.404624]:++++++++++ G_LOG:./tests/basic/uss.t: TEST: 42 gluster --mode=script --wignore volume start patchy ++++++++++
[2019-04-29 09:08:15.468780]  : volume start patchy : FAILED : Failed to get extended attribute trusted.glusterfs.volume-id for brick dir /d/backends/3/patchy_snap_mnt. Reason : No data available
"

These are the initial steps to create and start the volume. It is not clear why the trusted.glusterfs.volume-id extended attribute is absent. The analysis in [1] showed ENOENT errors (i.e., the export directory itself was absent). I suspect this is because of some issue with the cleanup mechanism at the end of the tests.
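
Whether the xattr survived on the brick root can be checked directly; glusterd sets trusted.glusterfs.volume-id on each brick directory at volume-create time (hypothetical diagnostic command, path taken from the log above):

    # Inspect the volume-id xattr on the brick directory from the failure:
    getfattr -n trusted.glusterfs.volume-id -e hex /d/backends/3/patchy_snap_mnt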


On Tue, Apr 30, 2019 at 8:37 AM Sanju Rakonde <srakonde@xxxxxxxxxx> wrote:
Hi Raghavendra,

./tests/basic/uss.t is consistently timing out in the release-6 branch. One such instance is https://review.gluster.org/#/c/glusterfs/+/22641/. Can you please look into this?

--
Thanks,
Sanju
