Re: Future of Test Cases and their Management in Fedora QA

I have played with https://kiwidemo.fedorainfracloud.org for a few days and I have to say I'm quite disappointed by it :-( I thought it would be closer to what we need, with some tweaking. But currently I'm hitting serious obstacles, and the user experience for a regular tester is not great either. It seems that Kiwi is designed in a very rigid way: either each person gets some area of responsibility and works just there, or there's a smaller team of educated testers who can easily collaborate on a shared area. Other approaches feel cumbersome.

First, in order to test something, you need to create a test run. When creating the test run, you have to define which test cases will be included. You can populate it from a test plan, which we can prepare, but either we'll have many, many small test plans with a couple of test cases each, or a couple of large test plans with tens of test cases. Working this way feels either constraining or overwhelming. You can't simply pick one random test case which hasn't been tested yet, as you can with our wiki matrices. Moreover, searching for those test plans is tedious, always filling out Product, Version, etc. You're presented with a flat database list of test plans, without any logical layout, milestone priorities etc, as on our wiki.

When creating a test run, you also have to fill out some fields, like Build. Perhaps we could provide pre-created hyperlinks which would forward testers to a fully filled-out page and they'd just click a button. After you create a test run, you have to start it, and after you're done, you have to end it. You don't actually need to, because you can populate results even if it hasn't started/has already ended, but certain views will display in-progress test runs and some won't (unless you display only in-progress test runs). This feels like a bookkeeping feature more than anything else, and it's a nuisance for us. People will forget to end their test runs.
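For the record, the per-run bookkeeping could probably be scripted away through Kiwi's RPC API. Here's a minimal sketch using the tcms-api client; the plan/build IDs are made up and the exact fields TestRun.create accepts are my assumption from skimming the API docs, so treat it as an outline rather than tested code:

    from tcms_api import TCMS

    rpc = TCMS().exec  # reads server URL and credentials from ~/.tcms.conf

    PLAN_ID = 42   # hypothetical "Server" test plan
    BUILD_ID = 7   # hypothetical Build record for the compose under test

    # TestRun.create is part of the upstream RPC API; the accepted field
    # names below are assumptions and would need checking.
    run = rpc.TestRun.create({
        'summary': 'Fedora-Rawhide-20220301.n.0 Server netinst (kparal)',
        'plan': PLAN_ID,
        'build': BUILD_ID,
        'manager': 'kparal',
    })
    print('created test run', run['id'])

But that only helps people who script their testing; it doesn't fix the web UI flow for a casual tester.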

Instead of everyone creating their own test run (actually many test runs), it seems possible to create one single "shared" test run and distribute the link to it. Other people can fill out the results, even if they haven't created it. However, there are serious disadvantages:
1. Only a single result can be recorded per test case. So we'd no longer be able to see results from multiple people, or from openqa plus an additional manual tester, for one particular test case. As described above, this feels like a very corporate style of work, and very different from what we need.
2. It is extremely easy to overwrite somebody else's result by accident. The test run page doesn't update automatically; you need to refresh it manually. So if you simply submit a result for a test case which looks untested, somebody else might already have submitted a result (possibly hours earlier) and yours will simply overwrite it. There's no warning, nothing (see the sketch below). On the wiki, we at least get a warning if people edit the same page/section simultaneously and there's an edit collision. This is a complete no-go for us. I even wonder how corporate teams avoid this situation when multiple people work on the same test run, unless they're sitting in the same office space talking to each other.
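To illustrate point 2: as far as I can tell the same last-write-wins behaviour exists at the API level, so even a scripted submission would have to do an unreliable check-then-write. A rough sketch (the field and status names here are my assumptions, not verified against the docs):

    from tcms_api import TCMS

    rpc = TCMS().exec

    EXECUTION_ID = 1234   # hypothetical execution in the shared test run
    PASSED_ID = 4         # status IDs differ per instance; pure assumption

    # Check whether somebody already recorded a result for this execution.
    current = rpc.TestExecution.filter({'id': EXECUTION_ID})[0]
    if current.get('status__name', 'IDLE') != 'IDLE':
        print('already submitted:', current['status__name'])
    else:
        # Another tester can still submit between the check above and this
        # update, and their result will be silently overwritten.
        rpc.TestExecution.update(EXECUTION_ID, {'status': PASSED_ID})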

Because sharing test runs is a no-go, individual test runs are the way to go. But there's another big hurdle: how do you figure out what you should test, i.e. what hasn't been tested yet by somebody else? I only found a single solution to this, which is looking at the "Status matrix". Unfortunately that page is probably the worst one I encountered in Kiwi. There's no dedicated Search button; it performs a search on any field change (and initially shows the whole database, if you wait long enough - ooof). There's no progress indication, so you never know whether a search was initiated, whether it is complete, or whether an empty results page is actually what you should see. The search params are not placed into the URL, so every time you want to refresh the page to see updated results, you have to fill out all the search fields again. The page readability is decent if only a low number of test runs is displayed. But if people create multiple test runs (for each smaller test plan/test section, because of submitting results before lunch/after lunch, etc), the readability gets much worse and it becomes an exercise in horizontal scrolling.

Our testcase_stats (e.g. [1]) do a much better job of clearly showing what was run and what wasn't, which build was tested last and where there were issues - especially if you want to show the history, not just the current build. And of course the page again provides just a database list of test cases without structuring them in any way, so we can't e.g. display milestone priorities in that list. In essence, the more test runs are performed, the harder it is to figure out what wasn't run yet, no test case structuring is possible, and you can't simply refresh the page to see the current state.

The Product/Version/Build organization is usable, I think, but it currently conflicts with our way of using "environments" (different arches, bios vs uefi, etc). Tim tried to solve it by embedding the environments into Product, and also by putting the image names in there, but I'm not sure it can work this way. For example, Server tests which are generally run just once can live in the "Server" product. But if we want to distinguish between netinst and dvd, on either uefi or bios, on three different architectures, that would just explode the number of Product values. It would make managing the test plans and reading the current test run results a nightmare.
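Just to put a number on it, folding the environment axes into Product means a separate Product value for every combination (made-up axis values, only to show the scale):

    from itertools import product

    # How quickly folding environments into the Product field multiplies
    # the number of Product values needed.
    images = ['Server netinst', 'Server dvd']
    firmware = ['bios', 'uefi']
    arches = ['x86_64', 'aarch64', 'ppc64le']

    values = ['{} {} {}'.format(img, fw, arch)
              for img, fw, arch in product(images, firmware, arches)]
    print(len(values))   # 12 Product values, and that's for Server alone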

Kiwi supports test case properties and defining applicable environments as an experimental feature. It is not covered in their docs, but by some trial and error I managed to define "firmware=bios, firmware=uefi" on a test case, and it then got split into two different test case executions (one with bios, one with uefi) when a test run was created. I find it a bit weird that those properties are defined on a test case rather than in a test plan, but it would work for us, I think. Unfortunately, even though you can submit results for these environment-split test cases, they are not displayed properly in the Status matrix. When a test case is split into two, it shows up as a single result there, with whatever result was submitted for the first half of it. So while this could solve some of our needs, it currently doesn't seem finished (unsurprisingly, since it's marked experimental).

So, overall, the user experience of submitting individual test case results is, I think, good, but everything around it (determining which test cases haven't been tested yet, getting some guidance on what to focus on, creating and managing your test run, navigating the system, getting an overall picture of the testing) is worse than with our wiki, sometimes considerably. (Of course I could have misunderstood many of Kiwi's features, I had never worked with it before. I'll be happy to be corrected.) I think that we could work around the issues by creating:
a) a startup page, where people would see our test cases logically separated (installation/desktop, Basic/Beta/Final, storage/package management/etc), and just click pre-created hyperlinks leading to Kiwi forms with all necessary input fields filled out already (assuming Kiwi supports that). This would save people from searching in endless drop-down menus, possibly selecting wrong things. It would also guide them on what to do ("this section is more important than that one because it's Beta and the other is Final").
b) an overall results page of the compose. This would help people figure out what test cases need attention (not tested yet, marked as failed, etc) and what the overall status of the compose is. This can be combined with a), of course.
c) an updated testcase_stats page to see the historical progress
Pages b) and c) would pull from Kiwi and auto-generate the current picture.
The problem of environments would be solved graphically by our own pages, and in the Kiwi internals, we'd probably do some ugly duplication or explosion of fields like Product.
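For b), I imagine something along these lines: a script that pulls all executions for a compose from Kiwi and groups them per test case, so we could render multiple results per case and our own milestone structure. Again, the RPC filter and field names here are my guesses and would need verifying:

    from collections import defaultdict
    from tcms_api import TCMS

    rpc = TCMS().exec
    BUILD_ID = 7   # hypothetical Build record for the compose

    # Collect every execution for this build across all test runs, grouped
    # per test case, so openQA and several manual testers can all show up
    # next to each other.
    results = defaultdict(list)
    for execution in rpc.TestExecution.filter({'build': BUILD_ID}):
        results[execution['case__summary']].append(execution['status__name'])

    for case, statuses in sorted(results.items()):
        print('{:60} {}'.format(case, ', '.join(statuses)))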

I'm honestly not sure if this is a good idea. It's a lot of work, and the end result doesn't seem to be much better than what we currently have. Yes, there is structured results storage and a nice results submission page. On the other hand, we'd need to maintain a hodge-podge of integration code which can easily break, plus yet another tool instance. Not to mention we could create those parts ourselves (use resultsdb and some simple html pages for submission), if we wanted to invest time into it. Speaking as someone who really dislikes the NIH syndrome and tries to use existing projects as much as possible, it seems we'd gut Kiwi too much to make it worth it. I was hoping we would be able to use it with just some slight tweaks, but so far, from what I've seen, it doesn't look that way.

I'll be happy if more people spend some time with it and post their thoughts/impressions.

Kamil

[1] https://openqa.fedoraproject.org/testcase_stats/36/Installation.html

