Re: Gerrit review, submit type and Jenkins testing

On 2016-01-08 at 12:03 +0530, Raghavendra Talur wrote:
> Top posting, this is a very old thread.
> 
> In view of the recent NetBSD problems and the number of bugs creeping
> in, I suggest we do these things right now:
> 
> a. Change the Gerrit merge type to fast forward only.
> As explained below in the thread, with our current setup, even if
> PatchA and PatchB each pass regression separately, a functional bug
> can creep in once both are merged.
> This is the only solution to prevent that from happening.
> I will work with Kaushal to get this done.
> 
> b. In Jenkins, remove gerrit trigger and make it a manual operation
> 
> Too many developers use the upstream infra as a test cluster and it is
> *not*.
> It is a verification mechanism for maintainers to ensure that the patch
> does not cause regression.
>
> It is required that all developers run full regression on their machines
> before asking for reviews.

Hmm, I am not 100% sure I would endorse that.
I am coming from the Samba process, where we have exactly
that: a developer should have run the full selftest before
submitting the change for review. Then, after two Samba
team developers have given their review+ (counting the
author), it can be pushed to our automation, which keeps
rebasing on current upstream and running selftest until
selftest either succeeds, in which case the result is
pushed as a fast forward, or fails.
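
Roughly, that gatekeeper works like this (a minimal sketch in
Python; the branch names, remote, and "make test" target are
assumptions for illustration, not the actual Samba autobuild
scripts):

#!/usr/bin/env python3
"""Sketch of a Samba-style autobuild gatekeeper: rebase the reviewed
branch on current upstream, rerun the full selftest, and push the
result as a fast forward only if the tests pass."""

import subprocess
import sys

def run(*cmd):
    # Abort the whole attempt as soon as any step fails.
    subprocess.run(cmd, check=True)

def gatekeep(candidate="review/candidate", upstream="origin/master"):
    run("git", "fetch", "origin")
    run("git", "checkout", candidate)
    # Rebase onto upstream as it is *now*, not as it was when the
    # author last tested; this is what makes the final push a fast
    # forward and what catches semantic conflicts between patches.
    run("git", "rebase", upstream)
    run("make", "test")
    # Only reached if selftest passed on the rebased tree. A real
    # gatekeeper would retry from the top if upstream moved meanwhile.
    run("git", "push", "origin", candidate + ":master")

if __name__ == "__main__":
    try:
        gatekeep()
    except subprocess.CalledProcessError as err:
        sys.exit("autobuild rejected: %s" % err)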

The reality is that people are lazy and think they know
when they can skip selftest. But they deceive themselves
and overlook problems, so either reviewers run into
failures or the automatic pre-push selftest fails. The
problem I see with this is that it wastes the reviewers'
precious time.

When I started contributing to Gluster, I found it to
be a big, big plus to have automatic regression runs
as a first step after submission, so that a reviewer
has the option to only start looking at the patch once
automatic tests have passed.

I completely agree that the fast-forward-only and
post-review-pre-merge-regression-run approach is the way
to go; only this way can the original problem described
by Talur be avoided.
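
On the Gerrit side, this should mostly be a matter of the
project's submit type, something like the following in
project.config on refs/meta/config (syntax as in the Gerrit 2.x
documentation; I have not checked it against our exact Gerrit
version):

[submit]
        action = fast forward only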

But would it be possible to keep, and even require, some
amount of automatic pre-review test run (a build and at
least some amount of runtime test)?
It really prevents wasting the time of reviewers and
maintainers.
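
To make the idea concrete, here is a minimal sketch of such a
pre-review smoke gate (the configure/make invocations and the
test list are assumptions for illustration, not Gluster's actual
job definitions):

#!/usr/bin/env python3
"""Sketch of a pre-review smoke gate: build the tree and run a small,
cheap subset of the regression tests, so reviewers only spend time on
patches that at least compile and pass basic runtime checks."""

import subprocess
import sys

# A handful of cheap, high-signal tests; the real selection would be
# whatever the project considers representative.
SMOKE_TESTS = ["tests/basic/mount.t", "tests/basic/volume.t"]

def main():
    steps = [
        ["./autogen.sh"],
        ["./configure", "--enable-debug"],
        ["make", "-j4"],
    ] + [["prove", "-v", test] for test in SMOKE_TESTS]
    for step in steps:
        if subprocess.run(step).returncode != 0:
            sys.exit("smoke gate failed at: " + " ".join(step))
    print("smoke gate passed; ready for human review")

if __name__ == "__main__":
    main()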

The downside, of course, is that such a gate can increase
the (real) time needed to complete a review from submission
to upstream merge.

Just a few thoughts...

Cheers - Michael


_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-devel
