On 07/05/2014, at 8:40 PM, Luis Pabon wrote:
> I agree that this is a major issue. Justin and I for a while tried to
> build the regressions on different VMs (other than build.gluster.org).
> I was never successful in running the regression on either CentOS 6.5
> or Fedora. Once we are able to run them on any VM, we can then
> parallelize (is that a word) the regression workload over many (N) VMs.

There is working Python code (on the Forge) which kicks off the regression tests in Rackspace. It's been a learning process, so the code is becoming a bit messy and could do with a refactor... but it's functional:

  https://forge.gluster.org/glusterfs-rackspace-regression-tester

It needs a bit more work before we can use it for automated testing:

 * At the moment it only compiles git HEAD for a given branch
   (e.g. master, release-3.4, release-3.5).

   I need to update the code so it can be passed a Change Set #, which
   it then uses to grab the right branch + proposed patch from Gerrit,
   and test them.

 * Then it needs to be hooked up to Jenkins.

   This I haven't investigated, apart from knowing we can call a script
   through Jenkins to kick things off, as per the existing regression
   test kick-off script.

We should also get this BZ fixed, as it impacts the regression tests from another direction:

  https://bugzilla.redhat.com/show_bug.cgi?id=1084175

To work around this problem, keeping the regression test hostnames very short (e.g. "jc0") works. Otherwise the "volume status" output wraps wrongly and tests/bugs/bug-861542.t fails (every time). Probably not as urgent to get done as the rackspace-regression-testing code though.

> I like your stage-1 idea. In previous jobs we had a script called
> "presubmit.sh" which did all you have described there. I'm not sure
> if forcing developers is a good idea, though.
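For the "passed a Change Set #" idea above, here's a rough sketch of how the tester could map a Gerrit change number to a fetchable ref. The change/patchset numbers and the repo URL in the usage comment are placeholders; the only real assumption is that Gerrit publishes each patchset under refs/changes/<last two digits of change>/<change>/<patchset>:

```shell
#!/bin/sh
# Sketch only: compute the Gerrit ref for a given change number and
# patchset, so the proposed patch can be fetched for testing.
gerrit_ref() {
    change=$1
    patchset=$2
    # Gerrit buckets changes by the last two digits of the change number.
    printf 'refs/changes/%02d/%d/%d\n' $((change % 100)) "$change" "$patchset"
}

# Hypothetical usage (change 7531, patchset 2 are made-up numbers):
#   git fetch git://review.gluster.org/glusterfs "$(gerrit_ref 7531 2)"
#   git checkout FETCH_HEAD
```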
> I think that if we shape up Jenkins to do the right thing, with the
> stages implemented there (and run optionally by the developers -- I
> would like to run them before I submit), then this issue would be
> resolved.

Yeah. Though we *must* also make our regression tests run reliably. The problems found running regression tests in Rackspace are very likely not Rackspace specific: every test failure I've looked at in depth (so far) has turned out to be a bug either in GlusterFS or in the test itself.

If we can't get the tests to run reliably, then we're going to have to do silly things like "run each test up to 3 times; if any of them pass 100% then report back SUCCESS, else report back FAILURE". While it'd probably technically work for a while, it'd also be kind of unbelievably lousy. (but if that's what it takes for now... ) ;)

> On 05/07/2014 03:27 PM, Harshavardhana wrote:
<snip>
>> stage-1 tests - runs on the Author's laptop (i.e. Linux) - git hook
>> perhaps which runs for each ./rfc.sh (reports build issues, other
>> apparent compilation problems, segfaults on init etc.)
>> This could comprise of
>> - smoke.sh
>> - 'make -j16, make -j32' for parallel build test
>> - Unittests

My understanding is that patch authors are already supposed to run the full regression test before submitting a patch using rfc.sh. It doesn't seem to be happening consistently though. One of the problems with doing that is it pretty much ties up the patch author's laptop until it's finished, unless it's run in a VM or something (recommended).

<snip>

>> On Wed, May 7, 2014 at 12:00 PM, Luis Pabon (Code Review)
>> <review@xxxxxxxxxxxxxxx> wrote:
<snip>
>>> Good point, but unit tests take no more time to compile, and only
>>> take 0.55 secs to run all of them (at the moment). Is this really
>>> an issue?

For 0.55 seconds, not really. Was just mentioning the principle.
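P.S. For completeness, the "run each test up to 3 times" fallback I mentioned would look something like this. A sketch only: the function name and the retry count of 3 are just illustrations, not anything we have in the tree:

```shell
#!/bin/sh
# Sketch: run a test script up to 3 times; report SUCCESS if any
# attempt passes, FAILURE if every attempt fails.
run_with_retries() {
    test_script=$1
    max_tries=3
    attempt=1
    while [ "$attempt" -le "$max_tries" ]; do
        if $test_script; then
            echo "SUCCESS: $test_script passed on attempt $attempt"
            return 0
        fi
        attempt=$((attempt + 1))
    done
    echo "FAILURE: $test_script failed all $max_tries attempts"
    return 1
}

# Hypothetical usage:
#   run_with_retries ./tests/bugs/bug-861542.t
```

As said above though, this only papers over flaky tests; fixing the underlying bugs in GlusterFS or the tests themselves is the real answer.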
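And Harshavardhana's stage-1 list quoted above could start out as a trivial wrapper that runs each check in order and stops at the first failure. A sketch; `run_stage1` is a made-up name, and the commands in the example invocation are just the ones from his mail:

```shell
#!/bin/sh
# Sketch: run stage-1 checks in order, aborting on the first failure.
# Commands are passed as strings and word-split, so they must not
# need internal quoting.
run_stage1() {
    for cmd in "$@"; do
        echo "stage-1: running: $cmd"
        if ! $cmd; then
            echo "stage-1 FAILED: $cmd" >&2
            return 1
        fi
    done
    echo "stage-1 passed"
}

# Hypothetical invocation, e.g. from a git hook or rfc.sh:
#   run_stage1 ./smoke.sh "make -j16" "make -j32" "make check"
```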
;)

+ Justin

--
Open Source and Standards @ Red Hat

twitter.com/realjustinclift

_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
http://supercolony.gluster.org/mailman/listinfo/gluster-devel