> My vision in this regard is something like this:
> * A patchset gets Verified +1.
> * A meta job is kicked off which determines regression jobs to run.
>   If the patch only touches GFAPI, we kick off the GFAPI regression
>   tests.  If it touches multiple modules, we kick off the tests for
>   those specific modules.

I think we do need to be a bit careful here.  Tests for one component
often do affect other components.  For example, a change to
xlators/protocol/server will potentially affect every test, so we can't
just run the tests for that one component.  What I was getting at was
the extreme case where we can be absolutely certain that component X is
not even loaded for test Y.

One could argue that we should let burn-in catch interactions between
components that are "distant" from one another, and not test one for
every single change to the other.  I'm not against that idea, but it's
not quite the same as the case where X and Y could not *possibly*
interact.

> * The results for each platform are aggregated under our current
>   labels, similar to how we aggregate results of multiple tests into
>   one label for smoke.
>
> Am I being too ambitious here?  Is there something I'm missing?

It's pretty ambitious, but only you can say whether it's too much so.
If you feel comfortable with that level of Gerrit/Jenkins voodoo, go
for it!

_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-devel
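
P.S. The meta-job selection logic being discussed could be sketched as a
simple path-prefix to test-suite mapping.  Everything below is
hypothetical (the component prefixes, suite names, and fallback policy
are placeholders, not actual Gluster CI configuration); it is only meant
to make the concern about broad components like xlators/protocol/server
concrete:

```python
# Hypothetical sketch of a meta job that picks regression suites from
# the files a patch touches.  Component prefixes and suite names are
# invented for illustration.

# Components whose changes can affect everything (e.g. the server-side
# protocol translator mentioned above) force the full suite.
BROAD_COMPONENTS = {"xlators/protocol/server"}

# Narrow components map to their own regression suites.
COMPONENT_SUITES = {
    "api": ["gfapi-regression"],
    "xlators/features/quota": ["quota-regression"],
}

ALL_SUITES = ["full-regression"]


def suites_for_patch(touched_files):
    """Return the regression suites to run for a list of touched paths."""
    suites = set()
    for path in touched_files:
        # Touching a broad component means we can't safely prune anything.
        if any(path.startswith(c) for c in BROAD_COMPONENTS):
            return list(ALL_SUITES)
        matched = False
        for prefix, names in COMPONENT_SUITES.items():
            if path.startswith(prefix):
                suites.update(names)
                matched = True
        # Unknown components also fall back to the full suite, to be safe.
        if not matched:
            return list(ALL_SUITES)
    return sorted(suites)
```

For example, a patch touching only `api/src/glfs.c` would select just
the GFAPI suite, while any patch touching `xlators/protocol/server/`
(or an unmapped path) would fall back to the full regression run — the
"can't possibly interact" pruning only applies when every touched path
maps to a known, narrow component.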