Re: Fwd: [fedora-qa] Issue #568: Proposal to split the Desktop Menus Testcase.

On Wed, Nov 7, 2018 at 9:21 PM Lukas Ruzicka <lruzicka@xxxxxxxxxx> wrote:

My initial opinion is that it's not a useful idea, for two reasons:

1) Even with this split, automating any of this with openQA is *not*
particularly easy, unless I'm missing something. App icons change, and
the set of apps included in the live images changes. If you just make a
test which literally has needles for every single app icon and just
goes through a loop of opening the 'start menu', looking for an icon,
clicking on it, closing the resulting app, and going on to the next one,
that test is going to break *any time* an app icon changes or an app is
removed from the image. (There's also the possibility that background
or transparency changes break *ALL* the icon needles at once, which
would be a nightmare).

What if we started the applications with Alt+F2? That works and does not require checking
for icons.

That verifies that the app can be started, but doesn't verify that the menu item works (or is even present) and doesn't necessarily run the same command (unless we parse it dynamically from the desktop file). It's certainly useful, but it doesn't completely verify the test case "make sure the app starts when executed from the menu".
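
Just to illustrate the "parse it dynamically" part: reading the launch command out of the desktop file is straightforward. A minimal Python sketch, assuming we only care about the Exec key and can simply strip the %f/%u-style field codes (the gedit path is just an example, and this is simplified compared to the full Desktop Entry spec):

    #!/usr/bin/env python3
    """Sketch: read the Exec command from an app's .desktop file, so a test
    could launch the same command the menu entry would run."""

    import configparser
    import re
    import shlex
    import subprocess

    def exec_command(desktop_file):
        # Desktop files are INI-like; disable interpolation because Exec lines
        # contain literal '%' field codes.
        cp = configparser.ConfigParser(interpolation=None)
        cp.read(desktop_file, encoding="utf-8")
        exec_line = cp.get("Desktop Entry", "Exec")
        # Drop field codes such as %f, %u, %U that a launcher would normally expand.
        exec_line = re.sub(r"%[a-zA-Z%]", "", exec_line)
        return shlex.split(exec_line)

    if __name__ == "__main__":
        # Hypothetical example: launch gedit the same way its menu entry would.
        cmd = exec_command("/usr/share/applications/org.gnome.gedit.desktop")
        subprocess.Popen(cmd)

That still only proves the Exec command works, though, not that the menu entry is actually present and wired up correctly, which is the part the test case asks about.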

Again, this brings me to our unbalanced state of automated testing vs reporting. Automated testing is good at evaluating small, discrete pieces rather than human-style end-to-end jobs, which is why it makes sense to report the small pieces separately and granularly. OTOH, our results matrix is primarily made for human consumption and operation, and a thousand sub-results is not something we really want to have in there. We would need a better tool that allows granular reporting while also allowing easy human consumption (grouping the results into bigger units, etc.).

What we can do is execute the tests in openQA without reporting results to the wiki and just pay attention to any failures, or we can submit the results to resultsdb and create another dashboard or something that will display the important bits to us. This would avoid the need to turn our existing wiki matrix into some monstrosity. Of course, there are disadvantages aplenty.
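
To sketch that second option: each application could be reported as its own tiny result, roughly like this (Python; the resultsdb URL, testcase naming and payload fields here are my assumptions and would need checking against the real resultsdb v2.0 API and our deployment):

    #!/usr/bin/env python3
    """Very rough sketch of reporting one granular sub-result to resultsdb
    over its REST API instead of the wiki matrix."""

    import requests

    # Hypothetical instance URL; our deployment would differ.
    RESULTSDB_URL = "https://resultsdb.example.org/api/v2.0/results"

    def report_subresult(app_name, passed, compose_id):
        payload = {
            # One testcase per application keeps the results granular.
            "testcase": f"desktop.menus.{app_name}",
            "outcome": "PASSED" if passed else "FAILED",
            "data": {
                "item": compose_id,
                "type": "compose",
            },
        }
        resp = requests.post(RESULTSDB_URL, json=payload, timeout=30)
        resp.raise_for_status()

    if __name__ == "__main__":
        # Hypothetical usage: report that gedit started fine from the menu.
        report_subresult("gedit", passed=True, compose_id="Fedora-29-20181107.n.0")

A separate dashboard could then group those per-application results back into a single row per desktop for human consumption, while the raw granularity stays available in resultsdb.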

 

3) Even if we could deal with 1) and 2) somehow, having just one part
of the test automated isn't a huge win. Just knowing that the apps
launch and close isn't *that* useful...and having that part of the test
automated doesn't make it any *easier* for a human to do 3), really.
So, where's the win?


I agree that this does not solve the entire problem, which is why I have also proposed focusing on a given set of applications that we would test thoroughly, while doing only really basic checks on the rest. If you look at the statistics, the menu test is one of the least frequently run test cases. And we know why: nobody wants to click through all of that, because half of the default GNOME applications are ones nobody ever really uses.

If the test case is to do its job, we need to run it often and start early, so that we do not catch bugs only after the Final freeze. Otherwise, it makes no sense anyway.

I believe we agree on the problem. We just need to crystallize the best implementation.

_______________________________________________
test mailing list -- test@xxxxxxxxxxxxxxxxxxxxxxx
To unsubscribe send an email to test-leave@xxxxxxxxxxxxxxxxxxxxxxx
Fedora Code of Conduct: https://getfedora.org/code-of-conduct.html
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: https://lists.fedoraproject.org/archives/list/test@xxxxxxxxxxxxxxxxxxxxxxx
