Re: Reducing regression runs (hopefully)

On Mon, Jul 25, 2016 at 08:38:34AM -0400, Jeff Darcy wrote:
> > I have a few proposals to reduce this turnaround time:
> >
> > 1. We do not clear the Verified tag. This means that if you want to re-run
> >    regressions, you have to trigger them manually. If your patch is rebased
> >    on top of another patch, you may have to re-trigger failing regressions
> >    manually.
> > 2. We give an automatic +1 for regressions if the change is *only* in
> >    `extras/`, the MAINTAINERS file, or other no-op changes. Please correct
> >    me here. I think these changes do not affect regressions. If I'm wrong
> >    and they do, I'd love to know which files do affect regressions. I've
> >    taken the MAINTAINERS file as an example; I'm also curious to know what
> >    other no-op changes can be made.
>
> I think you're on the right track here, Nigel.  :)  I'd also add that changes
> to one .t file shouldn't require that any others be run (the same is not true
> of .rc files though).

I'm going to enable retaining the Verified flag starting today, then. I've
already tested it on staging. If you modify the commit message or rebase the
patchset, the Verified, CentOS-regression, and NetBSD-regression labels will
carry over.
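
For the curious, the label retention relies on Gerrit's per-label copy
settings in project.config. A minimal sketch of what I mean, assuming our
label names and an older-style Gerrit config (the exact option names vary by
Gerrit version):

    [label "Verified"]
        # Keep the score across trivial rebases and commit-message-only edits.
        copyAllScoresOnTrivialRebase = true
        copyAllScoresIfNoCodeChange = true

    [label "CentOS-regression"]
        copyAllScoresOnTrivialRebase = true
        copyAllScoresIfNoCodeChange = true

    [label "NetBSD-regression"]
        copyAllScoresOnTrivialRebase = true
        copyAllScoresIfNoCodeChange = true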

>
> On the more ambitious side, I think we could also optimize around the idea
> that some parts of the system aren't even present for some tests.  For
> example, most of our tests don't even use GFAPI and won't be affected by a
> GFAPI-only change.  Conversely, GFAPI tests won't be affected by a FUSE-only
> change.  AFR, EC, and JBR code and tests are mutually exclusive in much the
> same way.  We have the burn-in test to catch "stragglers" and git to revert
> any patch with surprising effects, so IMO (at least on master) we should be
> pretty aggressive about pruning out tests that provide little value.

Raghavendra has been working on a patchset that does a better job of
segregating tests by module. I'll let him explain the specifics.

My vision in this regard is something like this (a rough sketch follows below):
* A patchset gets Verified +1.
* A meta job is kicked off which determines which regression jobs to run.
  If the patch only touches GFAPI, we kick off the GFAPI regression tests. If
  it touches multiple modules, we kick off the tests for those specific
  modules.
* The results for each platform are aggregated under our current labels,
  similar to how we aggregate the results of multiple tests into one label
  for smoke.
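
To make the meta job concrete, here's a rough Python sketch of the kind of
path-to-suite mapping I have in mind. The prefixes, suite names, and no-op
list below are illustrative only, not a proposal for the actual layout:

    #!/usr/bin/env python
    # Hypothetical meta job: map the files a patch touches to the
    # regression suites worth running. All paths and names are examples.
    import subprocess

    SUITE_MAP = {                        # illustrative prefix -> suite
        'api/': 'gfapi',
        'xlators/cluster/afr/': 'afr',
        'xlators/cluster/ec/': 'ec',
        'xlators/mount/fuse/': 'fuse',
    }
    NOOP_PREFIXES = ('extras/', 'MAINTAINERS', 'doc/')

    def changed_files():
        out = subprocess.check_output(
            ['git', 'diff-tree', '--no-commit-id', '--name-only', '-r', 'HEAD'])
        return out.decode().splitlines()

    def suites_for(files):
        if files and all(f.startswith(NOOP_PREFIXES) for f in files):
            return []                    # no-op change: auto +1, run nothing
        suites = set()
        for f in files:
            hits = [s for p, s in SUITE_MAP.items() if f.startswith(p)]
            if hits:
                suites.update(hits)
            elif not f.startswith(NOOP_PREFIXES):
                return ['full']          # unclassified file: run the full set
        return sorted(suites)

    if __name__ == '__main__':
        for suite in suites_for(changed_files()):
            print(suite)                 # a wrapper job would trigger these

The mapping being plain data is the point: as Raghavendra's work segregates
more tests by module, extending it is a one-line change.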

Am I being too ambitious here? Is there something I'm missing?

--
nigelb
_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-devel


