Hi, everyone.

In the recent debate about the update process it again became clear that we lack a good process for providing package-specific test instructions, and particularly specific instructions for testing critical path functionality. I've been working on a process for this, and now have two draft Wiki pages up for review which together describe it:

https://fedoraproject.org/wiki/User:Adamwill/Draft_QA_SOP_test_case_creation
https://fedoraproject.org/wiki/User:Adamwill/Draft_QA_SOP_package_test_plan_creation

The first isn't particularly specific to this effort, but it was a prerequisite I discovered was missing: it's a guide to test case creation in general, explaining the practical process of creating a test case and the principles to consider while doing it.

The second is what's really specific to this subject. It describes how to create a set of test cases for a particular package, and proposes a standardized categorization scheme which will let us mark test cases as associated with specific packages, and also mark them as covering critical path functionality. Since MediaWiki has a handy API which can also query categories, this should make it easy to derive, both manually and programmatically, the list of test cases for a given package and the list of *critical path* test cases for a given package. You can do this manually, but I also envision Bodhi and fedora-easy-karma using the API, so that when an update is pushed for a package which has test cases under this system, they link to those test cases; and when an update is pushed for a critical path package, they can display separately (and perhaps more prominently) the test cases covering the package's critical path functionality. (A rough sketch of such an API query is in the P.S. below.)

Comments, suggestions and rotten fruit welcome :) I'm particularly interested in feedback from package maintainers and QA contributors on whether, just after reading these pages, you'd feel confident going ahead and creating some test cases, or whether anything seems scary, badly explained or missing, such that you wouldn't know where to start.

The Trac ticket on this is probably valuable background, explaining why some things in the proposal are the way they are:

https://fedorahosted.org/fedora-qa/ticket/154

It also mentions one big current omission: dependencies. For instance, it would be very useful to be able to express 'when yum is updated, we should also run the PackageKit test plan', because a change in yum could be fine 'within itself' (all the yum test cases pass) yet still break PackageKit. That's rather complex, though, especially with a Wiki-based system. If anyone has any bright ideas on how to achieve this, do chip in!

Thanks.
--
Adam Williamson
Fedora QA Community Monkey
IRC: adamw | Fedora Talk: adamwill AT fedoraproject DOT org
http://www.happyassassin.net
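
P.S. For the programmatically inclined, here's a very rough sketch (in Python) of the kind of category query I have in mind for Bodhi or fedora-easy-karma. The category names are placeholders I made up to match the scheme proposed in the draft, the endpoint is just the standard MediaWiki api.php for our wiki, and it doesn't handle continuation for large categories, so treat it as an illustration rather than a finished tool:

#!/usr/bin/env python3
# Rough sketch only. The API endpoint is the standard MediaWiki api.php
# for the Fedora wiki; the category names further down are just ones made
# up to match the scheme proposed in the draft, so treat them as
# placeholders. Continuation (categories with more than 500 members)
# isn't handled.

import json
import urllib.parse
import urllib.request

API_URL = "https://fedoraproject.org/w/api.php"

def category_members(category):
    """Return the titles of all pages in a wiki category."""
    params = {
        "action": "query",
        "list": "categorymembers",
        "cmtitle": category,
        "cmlimit": "500",
        "format": "json",
    }
    url = API_URL + "?" + urllib.parse.urlencode(params)
    with urllib.request.urlopen(url) as resp:
        data = json.loads(resp.read().decode("utf-8"))
    return [member["title"] for member in data["query"]["categorymembers"]]

if __name__ == "__main__":
    # Hypothetical category names, for illustration only.
    print(category_members("Category:Package yum test cases"))
    print(category_members("Category:Critical path test cases"))

The idea is that a tool like fedora-easy-karma would call something like category_members() for the package named in the update, and again for the critical path category, then intersect or display the results as appropriate.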