Hi, folks! So I'm following up again on the request from the Council that we send someone to report on the QA project's status. The proposed date was 2015-06-08. At the last QA meeting (on 2015-05-04) I volunteered to do it, then realized I won't be able to make it on 2015-06-08, as I'll be on a plane. So we have two choices: either someone else volunteers to do the report, or we ask the Council if we can change the date. Is anyone else interested in doing the report? Please reply if so!

The other thing to settle is what we actually want to report. Matt provided the following guidelines:

- the current state of the subproject
- future plans
- things the team needs from the rest of the project
- any blockers we can help unblock
- big resource requests?

I have some initial thoughts, and it'd be good if anyone else can chip in: whoever reports to the Council should be acting as a representative for the team as a whole, so it'd be good to gather everyone's thoughts together for the representative to pass along.

Current state
-------------

I'd say we have a fairly solid process in place now for release validation, and everyone's probably more or less familiar with it. We also have the update testing process nailed down, though I still think we can improve it in some ways, and we *still* want Bodhi 2.0 :)

We have Taskotron running for some package tests and openQA for some installation validation tests. openQA is, I think, still running on openSUSE boxes, but some containerization work has been done that could let us move it to Fedora hosts (and I've also been working on packaging the openQA bits for Fedora).

I think we might still be lacking a bit in terms of Change testing, though there's a personpower issue there. We usually wind up cherry-picking only the most obviously potentially disruptive Changes for testing, and usually from the perspective of 'does this break the release'; we rarely manage to find the time to test Changes *in themselves* and help make sure they work well.

Future plans
------------

We have a fairly well-defined roadmap for Taskotron development, I believe; we could broaden its actual test coverage, but I think we still have some infrastructure work that needs nailing down before we can focus on that, particularly the disposable test client work and, I believe, results management.

I'd definitely like to see us extending openQA coverage in the post-F22 'quiet time'. There are still many validation tests that could successfully be automated, which would give us much better coverage and reduce the manual testing burden. I'm certainly intending to help work on this. We could also consider soliciting wider contributions of openQA tests; I think the openQA experiment has been fairly successful, and I could certainly envisage cases where maintainers might want to have tests related to their packages in openQA.

Something I've been kicking around with release engineering a bit, and which will probably come up at Flock, is the possibility of revising the whole TC/RC approach to release composing. It's getting fairly old, and is built on assumptions that are becoming somewhat outdated: that building and consuming composes are inherently expensive operations and hence need to be 'rationed' and carefully, manually handled.
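(To illustrate how cheap the 'consuming' side has become, here's a rough, purely hypothetical sketch of what kicking off automated tests for a nightly ISO can look like on the openQA side. The host, ISO name and job parameters are all invented, and it just shells out to the openQA client script, here assumed to be installed as openqa-client; treat it as a sketch, not blessed tooling.)

    #!/usr/bin/env python
    # Illustrative only: post a (hypothetical) nightly ISO to an openQA
    # instance so it schedules its installation tests against it. This
    # simply wraps the openQA client script, which handles API key
    # authentication from its own config file; the host, ISO name and
    # job parameters below are all made up.
    import subprocess

    ISO = "Fedora-Server-netinst-x86_64-Rawhide-20150601.iso"  # hypothetical
    BUILD = "Rawhide-20150601"  # hypothetical build identifier

    subprocess.check_call([
        "openqa-client", "--host", "https://openqa.example.org",
        "isos", "post",
        "ISO=" + ISO, "DISTRI=fedora", "VERSION=Rawhide",
        "FLAVOR=server", "ARCH=x86_64", "BUILD=" + BUILD,
    ])

Run something like that from cron when a nightly compose lands, and the 'automated tests on daily composes' half of the picture is basically there.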
In an ideal world, I think it would be nice to take a more modern, CI-ish approach: we should be doing the whole compose process daily (not just boot.isos and lives), running automated tests on the daily composes (openQA is already capable of this), and improving the efficiency of the process whereby updates get into composes.

The 'special package requests' in TC/RC compose requests have become kind of a permanent feature, but they were never meant to be. I'd prefer it if we could make the process whereby updates move to stable vastly more efficient, and perhaps have a slightly different process for handling builds we want to put in TCs/RCs but don't yet want to push stable (the most common case being the packages that are actually involved in *doing* composes, so in order to test them we need to build composes with them) - something more visible and less clunky than the current 'list them in TC/RC requests and they go into a manually maintained bleed repo' approach.

Things we need
--------------

Kinda tied in to the above: Bodhi 2.0 and more efficient releng processes - faster update pushes and composes - are the big two I can think of. I think tflink and kparal are in a better position than I am to know about any major resources we need.

Hope that kicks off a few ideas from other folks! Please don't be shy, and pitch in with any thoughts you have :)

--
Adam Williamson
Fedora QA Community Monkey
IRC: adamw | Twitter: AdamW_Fedora | XMPP: adamw AT happyassassin . net
http://www.happyassassin.net

--
test mailing list
test@xxxxxxxxxxxxxxxxxxxxxxx
To unsubscribe: https://admin.fedoraproject.org/mailman/listinfo/test