On Sat, 2021-12-18 at 15:16 -0500, Matthew Miller wrote:
> On Sat, Dec 18, 2021 at 10:49:53AM -0800, Adam Williamson wrote:
> > > This makes sense to me. It might also make sense for big changes to also
> > > include proposed updates to the validation criteria, just as modern software
> > > development expects new features to come with tests for those features.
> >
> > We do this, but only for *functional* requirements, which I think is
> > correct. I don't want us to be pinning software versions and what
> > specific implementation of a given function "must be" used in the
> > release criteria, in general, because it seems like a terrible
> > mechanism for it, and one that really wouldn't scale.
>
> Okay, fair enough — and I'm definitely not wanting to add _more_ automatic
> blockers. :)
>
> But it does seem like we should have _some_ set of automated testing that's
> linked to intentional, accepted changes. Nano-as-default in Fedora Server
> is another one.
>
> Maybe even something where "getting the test hooked up" is the next step for
> the change owner after the change is accepted. Is there a way where change
> owners could plug into some of our existing automated testing to do that?

So, there is kind of a scaling issue there. AFAIK openQA is still the
only system actually doing compose-level automated testing, and it just
isn't endlessly scalable, especially as currently deployed (on a
limited amount of physical hardware) and monitored (...mostly by me).
We need to pick what we focus on with it quite carefully.

I could add an "is-nano-default" test to it, sure, but...where do we
stop if we go down that route? There are, what, 20+ Changes per
release? If we started adding tests for even half of them each cycle,
we'd be piling up tests at a fairly rapid clip, and that would start to
cause manageability problems quite quickly. Never mind if we went back
and did it for the hundreds or more Features and Changes we've had in
the past.

I have added a couple of tests of this approximate type (like an "is
GTK+ 2 installed by default?" test) recently, because the desktop team
asked nicely, but that was my capricious whim, and if they asked for
more I might say no :P

I think this might be more feasible if we managed to implement compose-
level testing in Fedora CI, for the benefits of a potentially wider
base of people to maintain tests and monitor results, more capacity,
less overhead (you don't need all of openQA's screenshot-monitoring,
video-recording special sauce to implement an "is nano default?" test,
really), and somewhat lighter domain-specific knowledge requirements
for maintaining and monitoring tests...
-- 
Adam Williamson
Fedora QA
IRC: adamw | Twitter: adamw_ha
https://www.happyassassin.net
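
P.S. Just to illustrate how small that kind of check can be without any
of the screenshot/video machinery: here's a rough Python sketch, with
the caveats that it's an illustration only, it assumes the nano change
is implemented by exporting EDITOR from a profile.d snippet (as the
nano-default-editor subpackage does), and it assumes the check runs
inside the installed system:

  # Sketch only: assumes nano-as-default is delivered as an EDITOR
  # export sourced by login shells (e.g. a /etc/profile.d snippet),
  # and that this runs inside the installed system under test.
  import subprocess

  def default_editor() -> str:
      # Ask a fresh login shell what $EDITOR resolves to, so the
      # profile.d snippets are actually sourced rather than whatever
      # happens to be inherited from our own environment.
      result = subprocess.run(
          ["bash", "-l", "-c", "echo ${EDITOR:-unset}"],
          capture_output=True, text=True, check=True,
      )
      return result.stdout.strip()

  def test_nano_is_default_editor():
      assert "nano" in default_editor()

  if __name__ == "__main__":
      test_nano_is_default_editor()
      print("PASS: EDITOR points at nano")

The whole job is one exec and one string compare; the hard part is
everything around it (scheduling it against composes, reporting, and
keeping someone watching the results), which is exactly the Fedora CI
argument above.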