I generally agree with Adam's points. We don't want to add tests for every accepted Change proposal, and in particular not for the "update X to version Y" variety. That said, I think we should be checking certain paradigm-shift or otherwise notable changes. Changes to the default filesystem, moving to Wayland by default, etc. are obvious candidates. So are some key "how" or performance changes (e.g. WirePlumber, systemd-resolved). Looking at the last four releases, there are 2-3 per release that I would put on that list.

One thing we could do is require a check with QA in Change proposals (the way we require a check with Rel Eng). This would be a good, early opportunity to adjust criteria or test cases when needed.

I do think we should be careful about relying *solely* on our release criteria. They're an important part of the process and give us a lot of objectivity and predictability. But we're not producing a release to meet the criteria; we're producing a release to give our users an experience. (I don't love that phrasing, but I hope the intent is clear enough.) So if we're shipping something that passes the criteria and test cases but doesn't give the intended experience, then our criteria/tests are wrong. We'll never be able to test everything exactly, but if there's a way to close the gap sustainably, we should do that.

-- 
Ben Cotton
He / Him / His
Fedora Program Manager
Red Hat
TZ=America/Indiana/Indianapolis