Re: Package Update Acceptance Test Plan - final call

----- "James Laska" <jlaska@xxxxxxxxxx> wrote:

> > = Test Priority =
> > 
> > 1. Do we want to have the strict requirement that introspection and
> > advisory tests may be started only after all mandatory tests have
> > finished? In other words, do we want to have the testing sequence
> > like this:
> > 
> > -> mandatory tests -> (wait for completion) -> introspection + 
> > advisory tests
> > 
> > or this:
> > 
> > -> mandatory + introspection + advisory tests (no particular order)
> > 
> > Reason for (the first sequence): It prioritizes the most important
> > tests, so they will finish sooner. It can also save resources in
> > case some mandatory test fails and we won't run any subsequent
> > tests (see question 2).
> > Reason against: More difficult to implement. Less efficient from
> > an overall performance view.
> 
> My preference would be the path of least resistance, which seems like
> the second approach.  Should we suffer from slow mandatory test results
> due to introspection and advisory tests being run first, we have test
> scheduling options to explore to address the problem.  Whether it's
> prioritized/weighted jobs or grouping the tests into the three buckets,
> I'm not sure yet ... but I feel like that's something we can address in
> the future.

Same opinion here, thanks, document updated.
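
To have it written down somewhere, the weighted-jobs variant could look
roughly like this (just a Python sketch, all the names and example tests
are made up, this is not actual AutoQA code) - everything is submitted
at once, mandatory tests simply get to the front of the queue:

from dataclasses import dataclass

# Lower number = higher queue priority; nothing waits for anything else.
PRIORITY = {"mandatory": 0, "introspection": 1, "advisory": 2}

@dataclass
class Test:
    name: str
    category: str  # "mandatory", "introspection" or "advisory"

def schedule(tests):
    """Order the submission queue so mandatory tests reach the
    scheduler first, without blocking the other two groups."""
    return sorted(tests, key=lambda t: PRIORITY[t.category])

if __name__ == "__main__":
    for t in schedule([Test("rpmlint", "introspection"),
                       Test("installability", "mandatory"),
                       Test("some-advisory-test", "advisory")]):
        print(t.category, t.name)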

> 
> > = Test Pass/Fail Criteria =
> > 
> > 2. Do we want to continue executing tests when some mandatory
> > test fails?
> > Reason for: We will have more results; maybe the maintainer will
> > have a look at all the results and fix more issues at once
> > (not only the failed mandatory test).
> > Reason against: It wastes resources - the update will not be
> > accepted anyway. When a mandatory test fails (like installability
> > or repo sanity), many follow-up tests may fail because of that,
> > so they may not produce interesting output anyway.
> 
> Let's make our lives simpler at first ... schedule all the tests.
> Even if mandatory results fail, having the introspection and advisory
> results available for later review or comparison will be helpful.
> 
> That said, as soon as the mandatory tests have failed, we can initiate
> whatever process needs to happen to ensure that the package update is
> not accepted.  However, the other tests would continue to run and
> report results into the appropriate place.

Again I fully agree. Updated.
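
In code terms I imagine something like this (again only a sketch, the
Result type and the reporting are made up): the first mandatory failure
flags the update for rejection, but nothing gets cancelled and every
result is still reported:

from dataclasses import dataclass

@dataclass
class Result:
    test: str
    category: str   # "mandatory", "introspection" or "advisory"
    passed: bool

def process(results):
    """Report every result as it arrives; flag the update for
    rejection on the first mandatory failure, but keep reporting
    the remaining results instead of cancelling them."""
    rejected = False
    for r in results:
        print("result:", r.test, "PASS" if r.passed else "FAIL")
        if r.category == "mandatory" and not r.passed and not rejected:
            print("blocking acceptance of the update,"
                  " other tests keep running")
            rejected = True
    return rejected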

> 
> > 3. We should complement the requirement "all tests have finished"
> > with a definition of what happens if some test crashes. It is
> > obvious that we can't accept a package for which some substantial
> > (read mandatory or introspection) test has crashed. So we can add
> > a requirement like this:
> > "all mandatory and introspection tests have finished cleanly (no
> > test crashes)"
> > The question remains - what about advisory tests? Do we require
> > that they also don't crash? Or even if some of them crashes, it
> > won't be an obstacle to accepting the update?
> > Reason for: Advisory tests are not important for accepting the
> > update, so a test crash should not cause rejection.
> > Reason against: Some information won't be available. It could
> > happen that this information would cause the maintainer to
> > withdraw/renew the updated package.
> 
> Long-term, all these tests need to run and results presented to the
> maintainer.  Short-term while we are piloting this effort, I don't
> have a problem allowing a build to proceed once mandatory tests have
> completed, even if advisory/introspection tests failed or aborted.

I would prefer the document to cover longer-term objectives
rather than the pilot stage. I have added a requirement:
"# all mandatory and introspection tests have finished cleanly
(not crashed or otherwise aborted)"
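
Expressed as a predicate, the rule could look like this (a sketch only,
the Result type and status values are my invention): advisory crashes
don't block, and a plain introspection failure is left to the waiving
process from question 5:

from dataclasses import dataclass

@dataclass
class Result:
    test: str
    category: str   # "mandatory", "introspection" or "advisory"
    status: str     # "completed", "crashed" or "aborted"
    passed: bool = False

def update_acceptable(results):
    """All mandatory and introspection tests must have finished
    cleanly, and all mandatory tests must have passed; a crashed
    advisory test does not block acceptance."""
    for r in results:
        if (r.category in ("mandatory", "introspection")
                and r.status != "completed"):
            return False  # a substantial test crashed or aborted
        if r.category == "mandatory" and not r.passed:
            return False  # a mandatory test failed
    return True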

> 
> > = Introspection tests =
> > 
> > 4. Rpmlint - "no errors present" vs "no new errors present": It
> > is obvious we have to provide an option for package maintainers to
> > whitelist some rpmlint messages. The actual implementation of the
> > whitelist is still to be discovered, but that doesn't matter, it
> > will be there. In this respect it seems to me that the "no new
> > errors" requirement has no benefit over the "no errors"
> > requirement, because the latter does everything the former does
> > and more. When it is possible to whitelist errors, there's no
> > reason for us to allow any errors in the output.
> > Implementation note: "no new errors" is more rpmguard's task
> > than rpmlint's. We could implement that as an rpmguard check and
> > put it among the introspection tests, that's possible. But it's
> > not needed if we agree on the "no errors" requirement.
> 
> "No new errors" seems like an achievable short-term approach to add
> value while not overloading the maintainer with a potentially large
> list
> of rpmlint failures (/me thinking kernel) that they haven't been
> tracking since the package was initially incorporated into Fedora. 
> Down
> the road, I agree a whitelist mechanism would be ideal (as well as a
> shortcut to file a bug against rpmlint).

The whitelist mechanism is surely a must-have, otherwise we would
end up impaled on stakes, quartered and burnt alive :)

"No new errors" could work for some packages, but it wouldn't work
for many others, especially when a version number is mentioned in the
message - it will change for every release, so we would see it
as a new error anyway. Kernel package is a nice example of this.
A few references:

https://fedorahosted.org/pipermail/autoqa-results/2010-April/013921.html
https://fedorahosted.org/pipermail/autoqa-results/2010-April/016838.html
https://fedorahosted.org/pipermail/autoqa-results/2010-April/016436.html

I believe that when AutoQA goes live it will not be enforcing anything
for a long time, just providing additional info. That gives maintainers
plenty of time to solve these issues. I wouldn't worry too much; we can
also disable a test that causes trouble until the issue is resolved.
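
To illustrate why "no errors" plus a whitelist beats the old-vs-new
diff, a toy sketch (the whitelist format, the patterns and the sample
messages are completely made up): glob patterns can cover messages that
embed a version number, which is exactly what breaks the diff approach:

import fnmatch

# Illustrative whitelist entries, not a proposed format: one glob
# pattern covers every release of a version-bearing message.
WHITELIST = [
    "kernel.src: W: strange-permission *",
    "kernel*: E: invalid-version *",
]

def unwhitelisted(rpmlint_output):
    """Messages not covered by any whitelist pattern; 'no errors
    present' then simply means this list is empty."""
    return [line for line in rpmlint_output.splitlines()
            if line and
            not any(fnmatch.fnmatch(line, p) for p in WHITELIST)]

if __name__ == "__main__":
    sample = ("kernel.src: W: strange-permission kernel-2.6.33.tar.bz2\n"
              "kernel.src: E: some-other-error details\n")
    print(unwhitelisted(sample))  # only the second, unlisted message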

> 
> > =  Responsibilities and permissions =
> > 
> > 5. "Introspection tests failures may be waived by:"
> > 5a. Should Release Engineering be there?
> 
> No, let's just go with below.  And anyone can demonstrate required
> test
> skills and join the group, including release engineers.
> 
> > 5b. Should we add the new proventesters group?
> 
> Oh good catch, yes!

I have removed RelEng and also QA (on jkeating's suggestion) and
replaced them with the proventesters team. Or would it be better
to leave the whole QA team in?

> 
> Hope this helps,
> James
> 
-- 
test mailing list
test@xxxxxxxxxxxxxxxxxxxxxxx
To unsubscribe: 
https://admin.fedoraproject.org/mailman/listinfo/test
