On Sat, Dec 18, 2021 at 10:49:53AM -0800, Adam Williamson wrote:
> > This makes sense to me. It might also make sense for big changes to
> > also include proposed updates to the validation criteria, just as
> > modern software development expects new features to come with tests
> > for those features.
>
> We do this, but only for *functional* requirements, which I think is
> correct. I don't want us to be pinning software versions and what
> specific implementation of a given function "must be" used in the
> release criteria, in general, because it seems like a terrible
> mechanism for it, and one that really wouldn't scale.

Okay, fair enough — and I'm definitely not wanting to add _more_
automatic blockers. :)

But it does seem like we should have _some_ set of automated testing
that's linked to intentional, accepted changes. Nano-as-default in
Fedora Server is another one. Maybe even something where "getting the
test hooked up" is the next step for the change owner after the change
is accepted. Is there a way where change owners could plug into some of
our existing automated testing to do that?

-- 
Matthew Miller
<mattdm@xxxxxxxxxxxxxxxxx>
Fedora Project Leader