I will send another patch to make all the hard-coded timeouts configurable.
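To give an idea of the direction, something along these lines (the file name, keys, and defaults below are only illustrative, not the actual glusto-tests layout):

import configparser

# Fallbacks used when the config file or a key is missing (values in seconds).
DEFAULTS = {"brick_up_timeout": "30", "poll_interval": "2"}

def load_timeouts(path="timeouts.ini"):
    """Read test timeouts from an ini file so CI can tune them without
    code changes; fall back to DEFAULTS otherwise."""
    parser = configparser.ConfigParser(defaults=DEFAULTS)
    parser.read(path)  # a missing file is silently ignored
    section = parser["DEFAULT"]
    return {
        "brick_up_timeout": section.getint("brick_up_timeout"),
        "poll_interval": section.getint("poll_interval"),
    }

The tests would then take their wait values from this instead of the hard-coded 10 seconds.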
-Shwetha

On Mon, Aug 28, 2017 at 8:57 AM, Nigel Babu <nigelb@xxxxxxxxxx> wrote:
Shwetha,

Is this timeout configurable? Or is it hard-coded into the glusto-tests repo?

--

On Sat, Aug 26, 2017 at 1:59 AM, Shyam Ranganathan <srangana@xxxxxxxxxx> wrote:

Nigel was kind enough to kick off a Glusto run on 3.12 head a couple of days back. The status can be seen here [1].
The run failed, but it got further than the Glusto runs on master do (see [2]). Not that this is a consolation, but just stating the fact.
The run [1] failed at:
17:05:57 functional/bvt/test_cvt.py::TestGlusterHealSanity_dispersed_glusterfs::test_self_heal_when_io_in_progress FAILED
The test case failed due to:
17:10:28 E AssertionError: ('Volume %s : All process are not online', 'testvol_dispersed')
The test case can be seen here [3], and the reason for the failure is that Glusto did not wait long enough for the down brick to come back up (it waited for 10 seconds, but the brick came up after 12 seconds, within the same second that the check for it being up ran). The log snippets pointing to this problem are here [4]. In short, no real bug or issue has been found to have caused the failure as yet.
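To be clear about what a longer/safer wait would look like, the idea is to poll with an upper bound rather than sleep for a fixed 10 seconds; a rough sketch (the function name and values here are illustrative, not the actual glusto-tests helper):

import time

def wait_until(check, timeout=30, interval=2):
    """Poll check() until it returns True or timeout seconds elapse.

    check is any zero-argument callable, e.g. a wrapper around the
    existing "are all volume processes online" query used by the test."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        if check():
            return True
        time.sleep(interval)
    return False

With something like this, the assertion in [3] would only fire if the brick is still down after the full timeout, instead of exactly 10 seconds after it was brought back up.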
Glusto as a gating factor for this release was desirable, but having got this far on 3.12 does help.
@nigel, we could increase the timeout between bringing the brick up and checking that it is up, and then try another run. Let me know if that works, and what is needed from me to get this going.
Shyam
[1] Glusto 3.12 run: https://ci.centos.org/view/Gluster/job/gluster_glusto/365/
[2] Glusto on master: https://ci.centos.org/view/Gluster/job/gluster_glusto/360/testReport/functional.bvt.test_cvt/
[3] Failed test case: https://ci.centos.org/view/Gluster/job/gluster_glusto/365/testReport/functional.bvt.test_cvt/TestGlusterHealSanity_dispersed_glusterfs/test_self_heal_when_io_in_progress/
[4] Log analysis pointing to the failed check: https://paste.fedoraproject.org/paste/znTPiFLrc2~vsWuoYRToZA
"Releases are made better together"
_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
http://lists.gluster.org/mailman/listinfo/gluster-devel
nigelb