On Mon, 17 Oct 2016 17:18:25 +0000
Zbigniew Jędrzejewski-Szmek <zbyszek@xxxxxxxxx> wrote:

> On Mon, Oct 17, 2016 at 12:45:30PM -0400, Matthew Miller wrote:
> > On Mon, Oct 17, 2016 at 04:38:28PM +0000, Zbigniew
> > Jędrzejewski-Szmek wrote:
> > > It's a good principle to require both tests and the fixes needed
> > > for those tests to pass to be submitted and merged as a single
> > > pull request. I'd love to see a PR that adds a test for one of my
> > > packages, exposes some bugs, but immediately fixes any fallout. I
> > > would be less thrilled to have tests committed which will fail on
> > > the next rebuild, leaving me to fix the package (or manually
> > > override the tests).
> >
> > As I'm imagining it, tests added in this way would be non-blocking,
> > and would be expected to pass initially.
>
> "feel good" tests ;) But seriously, why should people write only
> tests which pass?

I think this starts getting into the question of what we want to do
with the results. If we are going to start gating builds based on
results from automation, we will need some way to indicate which
results contribute to a failure or a pass as part of that gating
process.

The simple way to do this is to say "every test needs to pass for
gating to happen without manual override", but I'm not sure that's the
best solution here, and I'm not aware of any existing proposals on how
to manage that in a more granular way for build gating (a rough sketch
of what I mean by "more granular" is at the end of this mail).

I'm of the opinion that it would be best to get the checks/tests
running, and once we have enough of those to justify gating builds, we
can start figuring out what kinds of things should fail a build. Even
if we're not gating builds right away, more testing is being done and
forward progress is being made.

Tim
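
To make "more granular" a bit more concrete, here is a purely
hypothetical sketch in Python. The check names and the per-check
gating flag are invented for illustration and do not correspond to any
existing Fedora tooling or policy:

    # Purely hypothetical sketch of "granular" build gating: each automated
    # check carries a flag saying whether its failure should block the build.
    # The check names and the per-check gating flag are invented here for
    # illustration only; this is not an existing Fedora tool or policy.

    from dataclasses import dataclass
    from typing import List


    @dataclass
    class CheckResult:
        name: str
        passed: bool
        gating: bool  # True if a failure of this check should block the build


    def build_blocked(results: List[CheckResult]) -> bool:
        """Return True if any gating check failed.

        The "every test needs to pass" rule is the special case where
        every check has gating=True; a granular policy simply flips that
        flag per check.
        """
        return any(r.gating and not r.passed for r in results)


    # Example: a non-gating "feel good" check fails, but the build still
    # goes through without a manual override.
    results = [
        CheckResult("rpmlint", passed=True, gating=True),
        CheckResult("upstream-testsuite", passed=False, gating=False),
    ]
    print("block build?", build_blocked(results))  # prints: block build? False

The point of the sketch is only that the gating decision and the set of
checks can be kept separate, so which results actually block a build can
be decided later, per check, rather than all-or-nothing up front.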