Re: Streamlining backports

I'd like to add my 2c from the human testing standpoint.
While we discuss how to improve PR testing, we also test numerous PRs
semi-manually on a daily basis.

Ever since we started adding labels to PRs automatically, it has
actually become more difficult to tell from the auto-generated labels
which tests/suites need to be run.

For example, the labels "fs" and "core" mean that the respective
suites/tests are to be executed. That works if the applied labels are
accurate, but in many cases they simply are not: e.g. we end up running
both 'fs' and 'rados' when only one would suffice.

So if we do use automation to add labels to PRs, the labels have to be
accurate; otherwise we will over-utilize resources on unneeded test
runs.
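
To illustrate what I mean by accurate labels: a suite label is only
worth acting on if it can be derived from what the PR actually touches.
Here is a rough sketch (the path-to-suite mapping and helper names are
made up for illustration; this is not our real labeling logic):

# Illustration only: the mapping below is invented, not the real rules.
# The idea: a suite label is justified only if some changed file falls
# under a subtree that the suite actually exercises.
PATH_TO_SUITE = {
    "src/mds/": "fs",
    "src/client/": "fs",
    "src/osd/": "rados",
    "src/mon/": "rados",
}

def justified_suites(changed_paths):
    # Suites that the changed files actually warrant.
    return {suite
            for path in changed_paths
            for prefix, suite in PATH_TO_SUITE.items()
            if path.startswith(prefix)}

def stale_labels(applied_labels, changed_paths):
    # Labels on the PR that no changed file justifies -- these are the
    # ones that trigger unneeded suite runs.
    return set(applied_labels) - justified_suites(changed_paths)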

Thx
YuriW


On Thu, Apr 22, 2021 at 11:17 AM Nathan Cutler <ncutler@xxxxxxxx> wrote:
>
> Hi Ernesto:
>
> I fully concur with what Loic wrote, and just to add to that:
>
> Years ago, a lead developer gave us some general principles that all backports
> should ideally follow. These are codified:
>
> https://github.com/ceph/ceph/blob/master/SubmittingPatches-backports.rst#general-principles
>
> but I think it's worth quoting them here. Each backport is supposed to specify:
>
> 1. what bug it is fixing
> 2. why this fix is the minimal way to do it
> 3. why it needs to be fixed in <release>
>
> Now, how good we are, as a project, at adhering to these principles is already
> pretty questionable. How will introducing more automation help us improve?
> Or maybe we should change the principles to say: "the Ceph project encourages
> commits to be backported from master indiscriminately without any justification
> or risk analysis"?
>
> I guess we would not change the stated principles as suggested, but I still
> think that, when deciding what kind of automation to introduce, we should ask
> ourselves questions like:
>
> How stable are our "stable" releases?
> Do we value stability over features, or vice versa?
> How often does the drive to backport stuff introduce regressions?
> How do we gauge the riskiness of a given backport?
> Do the answers to these questions vary from one component to another, or can
> answers be formulated on a project-wide basis?
>
> Backporting stuff is necessary, but also risky. Automation, I think, can
> actually increase the frequency with which we unintentionally introduce
> regressions into stable releases because
>
> * automation, if successful, might tend to increase the overall number of
>   backports
> * automation cannot provide any justification or estimate of risk, so it might
>   also increase the number of backports that lack any justification or
>   estimation of risk
>
> I'm skeptical of the value of labels, but I do think it would be useful to have
> Jenkins jobs checking:
>
> 1. whether the commits being cherry-picked are really in master
> 2. whether the master commits cherry-picked cleanly
> 3. whether the backport PR contains the same number of commits as the master PR
>
> These couldn't be "mandatory" checks because there are plenty of exceptions,
> but I think having this information available would be useful for reviewers
> (though I don't review backports myself, so I can't say for sure).
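>
> To sketch what such a Jenkins job could do with plain git (this is
> only a sketch: the branch name and the way the backport's source
> commits are identified are assumptions on my part, not existing
> tooling):
>
> import re
> import subprocess
>
> def git(*args):
>     # Run a git command and capture its output; callers inspect
>     # returncode or stdout as needed.
>     return subprocess.run(["git", *args], capture_output=True, text=True)
>
> def picked_from(sha):
>     # The source commit recorded by 'git cherry-pick -x', if any.
>     msg = git("log", "-1", "--format=%B", sha).stdout
>     m = re.search(r"cherry picked from commit ([0-9a-f]{40})", msg)
>     return m.group(1) if m else None
>
> def in_master(sha):
>     # Check 1: is the source commit actually reachable from master?
>     return git("merge-base", "--is-ancestor", sha,
>                "origin/master").returncode == 0
>
> def patch_id(sha):
>     # A stable hash of the commit's diff, for comparing the backport
>     # commit with its master counterpart.
>     diff = git("show", sha).stdout
>     out = subprocess.run(["git", "patch-id", "--stable"],
>                          input=diff, capture_output=True, text=True)
>     return out.stdout.split()[0] if out.stdout else None
>
> def clean_pick(backport_sha):
>     # Check 2: matching patch-ids mean the cherry-pick applied without
>     # conflict-resolution changes to the diff.
>     src = picked_from(backport_sha)
>     return src is not None and patch_id(backport_sha) == patch_id(src)
>
> # Check 3 is then just a count: compare the number of commits in the
> # backport PR with the number in the master PR (both available from
> # the GitHub API).
>
> Comparing patch-ids, rather than re-applying the pick, has the nice
> property of not touching the worktree.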
>
> Nathan
_______________________________________________
Dev mailing list -- dev@xxxxxxx
To unsubscribe send an email to dev-leave@xxxxxxx


