Hello, I have added some notes below.

> The other thing to settle is what we actually want to report. Matt
> provided the following guidelines:
>
> - the current state of the subproject
> - future plans
> - things the team needs from the rest of the project
> - any blockers we can help unblock
> - big resource requests?
>
> Current state
> -------------
>
> I'd say we have a fairly solid process in place now for release
> validation, and everyone's probably more or less familiar with it. We
> also have the update testing process nailed down, though I still think
> we can improve it in some ways, and we *still* want Bodhi 2.0 :)
>
> We have taskotron running for some package tests and openQA for some
> installation validation tests. openQA is I think still running on
> openSUSE boxes, but we have some work done on containerization to
> potentially move them to Fedora hosts (and I've also been working on
> packaging the openQA bits for Fedora).
>
> I think we might still be lacking a bit in terms of Change testing;
> there's a personpower issue there, though. We usually wind up only
> cherrypicking the most obviously potentially disruptive Changes for
> testing, and usually from the perspective of 'does this break the
> release'; we rarely manage to find the time to test Changes *in
> themselves* and help make sure they work well.

I think that is a good summary. To provide more background for those
not familiar with what has been happening lately: during the Fedora 22
cycle we used openQA (an openSUSE project) to run roughly 20-25
installation test cases, under the tester name 'coconut'. We
unfortunately don't have this system running in public (even though
Adam's development machine was running it), but the pass results were
submitted to our wiki, and the failures were examined manually. I
think this was quite a success, and it saved us a lot of manual work.
It also detected some issues, even though most of those were detected
manually as well. But the main benefit, I believe, is that we can skip
a lot of repetitive manual testing (because we know it's covered by
openQA) and focus on dealing with real bugs, doing more exploratory
testing, and so on.

On the Taskotron front, led mainly by Tim and Martin in recent days,
there have been some improvements and a lot of bug fixes. They are
mostly hidden under the hood, but one thing I would mention is that
tasks can now store "task artifacts" - basically any files useful for
later review. These are not yet exposed in the ResultsDB UI, but the
rest of the code is there.

Another invisible area that has improved is that we've been working
our way towards disposable test clients. Those will allow us to run
destructive checks, and potentially any third-party checks/tasks in
the future. This is by no means finished yet.

> Future plans
> ------------
>
> We have a fairly well-defined roadmap for Taskotron development, I
> believe; we could broaden its actual test coverage, but I think we
> still have some infrastructure work that probably needs nailing down
> before we can focus on that, particularly in terms of the disposable
> test client work, and also I believe in terms of results management.

Disposable test clients are now our top priority in Taskotron. We are
also working on emitting fedmsg notifications, which we need for
integration with Bodhi 2.0.
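To sketch what that could look like (just an illustration; the topic
and all message fields below are invented, not an agreed-upon schema):

    import fedmsg

    # Hypothetical sketch: announce a finished check on the message
    # bus so that Bodhi 2.0 (or anything else) can consume the result,
    # instead of us pushing comments into Bodhi ourselves.  The topic
    # name and message fields are made up for illustration.
    fedmsg.publish(
        topic='result.new',
        modname='taskotron',
        msg={
            'item': 'pidgin-2.10.11-1.fc22',
            'check': 'depcheck',
            'outcome': 'PASSED',
        },
    )

The nice part of going through the bus is that we would no longer
crash when Bodhi itself has a bad day; the message just sits there
until consumers pick it up.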
Another middle-term feature could be, I think, finishing up task
artifacts so that they show up in ResultsDB, and then using that
functionality to display only the relevant parts of test logs to
maintainers (e.g. the pidgin maintainer should only see test output
relevant to pidgin, not everything else) - the last part is actually
already implemented, but the dots are not connected yet.

> I'd definitely like to see us extending openQA coverage in the
> post-F22 'quiet time'. There are still many validation tests that
> could successfully be automated, which gives us much better coverage
> and reduces the manual testing burden. I'm certainly intending to
> help work on this. We could also consider soliciting wider
> contributions of openQA tests; I think the openQA experiment has been
> fairly successful, and I could certainly envisage some cases where
> maintainers might want to have tests related to their packages in
> openQA.

I don't really know what Adam's vision was in his last sentence. But
in general, yes, automating more installation/fedup/system basics
tests seems to be in the plans and worthwhile.

> Something I've been kicking around with release engineering a bit,
> and which will probably come up at Flock, is the possibility of
> revising the whole TC/RC approach to release composing. It's getting
> fairly old, and is built on some assumptions that are becoming
> somewhat outdated: that building and consuming composes are
> inherently expensive operations and hence need to be 'rationed' and
> carefully, manually handled. In an ideal world, I think it would be
> nice to take a more modern, CI-ish approach: we should be doing the
> whole compose process daily (not just boot.isos and lives) and
> running automated tests on the daily composes (openQA is already
> capable of this), and improving the efficiency of the process whereby
> updates get into composes. The 'special package requests' in TC/RC
> compose requests have become kind of a permanent feature, but they
> were never meant to be. I'd prefer if we could make the process
> whereby updates move to stable vastly more efficient, and perhaps
> have a slightly different process for handling builds we want to put
> in TCs/RCs but which we don't yet want to push stable (the most
> common case being the packages that are actually involved in *doing
> composes*, so in order to test them, we need to build composes with
> them) - something more visible and less clunky than the current 'list
> them in TC/RC requests and they go into a manually maintained bleed
> repo' approach.
>
> Things we need
> --------------
>
> Kinda tied in to the above: Bodhi 2.0, and more efficient releng
> processes, are the big two I can think of - faster update pushes and
> composes. I think tflink and kparal are in a better position than I
> to know about any major resources we need.

I wouldn't say we "need" Bodhi 2.0, but it would definitely help us
get rid of some very nasty code (pushing comments to Bodhi), probably
improving the speed and reliability of our checks considerably (we
often crash on Bodhi server errors).

Another thing that would help a lot, I think, is some improvement in
the TC/RC engineering process, for example sending fedmsgs after a TC
is complete, or having better structured metadata describing the
compose (for example, a JSON structure allowing us to see which ISO
images are available for the Server product, along with their file
paths and types - DVD, netinst). We have a lot of black magic for this
already, and it mostly works, but such improvements would simplify the
code a lot and make it more reliable, especially when changes are
introduced.
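Purely as an illustration of the kind of metadata I mean (the
structure and every field name here are invented):

    {
      "compose": "22_RC1",
      "products": {
        "Server": {
          "images": [
            {"type": "dvd", "arch": "x86_64",
             "path": "Server/x86_64/iso/Fedora-Server-DVD-x86_64-22.iso"},
            {"type": "netinst", "arch": "x86_64",
             "path": "Server/x86_64/iso/Fedora-Server-netinst-x86_64-22.iso"}
          ]
        }
      }
    }

With something like that published alongside each compose, our tools
could discover the images to test mechanically instead of guessing at
file names.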
For Taskotron itself, I don't have a feeling we're blocked on anything
or urgently need anything done, but from the infrastructure point of
view, Tim will definitely be able to provide much better details.

--
test mailing list
test@xxxxxxxxxxxxxxxxxxxxxxx
To unsubscribe: https://admin.fedoraproject.org/mailman/listinfo/test