Re: [fedora-qa] Issue #568: Proposal to split the Desktop Menus Testcase.

On Wed, Nov 7, 2018 at 10:44 AM Lukas Ruzicka <lruzicka@xxxxxxxxxx> wrote:
> Hello Fedora QA and friends,
>
> the *Desktop Menus Testcase* is one of the most boring tests to do, and it would be nice if OpenQA could do it for us. However, the way the testcase is currently written, it cannot be automated, because it relies too much on human judgment. I am proposing to split it into the following testcases:
>
> 1. *Desktop Menus Visuals*, in which we would check whether all menu entries have proper icons, names, etc.
> 2. *Desktop Menus AppStarts*, in which we would test whether all menu items can be started (easy with OpenQA)

Is it really that easy? I see the following issues:
* You can implement a test for each application that matches some portion of its interface to make sure the app started successfully. E.g. for Gedit that could be the Open button in the top left, or for Nautilus a portion of its sidebar. That will require frequent maintenance, though (every time the font changes, widgets move a few pixels, etc.).
* Or you can just match the close button icon ❌ for each application, but that has the disadvantage that e.g. an error dialog (with the same close icon) can be mistaken for a properly started application (see the sketch right after this list).
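For context, what openQA calls a "needle" is essentially a stored reference image that gets fuzzily matched against a screenshot of the VM. Here is a minimal standalone sketch of that idea in Python with OpenCV. This is not openQA's actual needle engine, and the file names are invented, but it shows why a bare close-button needle can't tell a healthy application window from an error dialog:

    # Standalone illustration (not openQA itself) of the kind of fuzzy image
    # match a needle performs. The file names are made up for the example.
    import cv2

    def needle_matches(screenshot_path, needle_path, threshold=0.95):
        """Return True if the reference image is found in the screenshot."""
        screen = cv2.imread(screenshot_path, cv2.IMREAD_GRAYSCALE)
        needle = cv2.imread(needle_path, cv2.IMREAD_GRAYSCALE)
        result = cv2.matchTemplate(screen, needle, cv2.TM_CCOEFF_NORMED)
        _, max_val, _, _ = cv2.minMaxLoc(result)
        return max_val >= threshold

    # A generic close-button needle matches an error dialog just as happily
    # as the application it was supposed to verify:
    print(needle_matches("gedit-after-launch.png", "close-button.png"))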

Furthermore, I can't imagine how you'd make sure, using OpenQA, that *all* applications have really been tested. Let's say there are 20 default applications in F29 Workstation, and you implement the test for all of them. If a new default application is added in F30 (so there are 21 of them), OpenQA can't spot that: it will claim that all applications start successfully, but it only covers the apps known to OpenQA, not all apps actually shipped in that environment. So this will need regular verification, which is unfortunately quite easy to forget about.
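That said, the "a new default app appeared and nobody noticed" part could at least be turned into something a machine nags us about, for example by diffing the installed .desktop entries against the list of apps openQA knows. A rough sketch (the covered-apps set is made up for the example, and it naively ignores NoDisplay/Hidden entries, so it would over-report a bit):

    # Compare menu entries on the installed system against the apps that
    # openQA has tests for. The COVERED_BY_OPENQA set is hypothetical.
    from pathlib import Path

    COVERED_BY_OPENQA = {"org.gnome.gedit", "org.gnome.Calculator"}

    def installed_desktop_apps():
        apps = set()
        for d in ("/usr/share/applications", "/usr/local/share/applications"):
            for f in Path(d).glob("*.desktop"):
                apps.add(f.stem)
        return apps

    uncovered = installed_desktop_apps() - COVERED_BY_OPENQA
    if uncovered:
        print("Menu entries with no openQA coverage:", sorted(uncovered))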

I'm not saying it's not a good idea to use OpenQA to help us with this problem. I'm just not seeing it as that easy.

> 3. *Desktop Menus Basic Functionality*, in which we would test whether the apps basically work (perhaps this can be automated in OpenQA if the basic functions are not too complicated)

As long as we do this manually, I'm not sure what this proposal really brings us, except (ironically) more bureaucratic work with the matrices. If I test basic functionality manually, I automatically see whether the menu item visuals are OK (name, icon) and whether the app can be started (kind of a prerequisite for basic functionality :)). So I'm not sure where the separated test cases save us work. Can you clarify where you see the time savings?

Please note that the current state of the testcase doesn't prevent us from automating some parts of that in OpenQA already. It's always helpful to be notified ASAP that something broke in Rawhide/Branched, even if it doesn't have a matching results field in the test matrix (e.g. the testcase stays as it is, requiring humans to do all the work, but OpenQA tests a few selected apps on its own). I'd even say it would be better to start automating this first, see how hard it is, and then adjust the test case based on our experience of how well this works in OpenQA. If we e.g. automate basic functionality testing for gedit and gnome-calculator, we can then talk about separating those out somehow (so that OpenQA can fill those results into the matrix) and the testcase could require humans to test all apps not covered by OpenQA.
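To make the "start small" part concrete: in openQA this would end up as a Perl test module asserting on screens, but the very first iteration doesn't need more than a "does it start and stay up" check per application. A standalone sketch of that shape (the application commands are just examples):

    # Not openQA code, just the cheapest possible smoke test: start the
    # app, give it a few seconds, and check it hasn't exited or crashed.
    import subprocess
    import time

    def app_survives_launch(command, grace_seconds=5):
        proc = subprocess.Popen(command)
        time.sleep(grace_seconds)
        still_running = proc.poll() is None
        proc.terminate()
        return still_running

    for cmd in (["gedit"], ["gnome-calculator"]):
        print(cmd[0], "ok" if app_survives_launch(cmd) else "FAILED")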

_______________________________________________
test mailing list -- test@xxxxxxxxxxxxxxxxxxxxxxx
To unsubscribe send an email to test-leave@xxxxxxxxxxxxxxxxxxxxxxx
Fedora Code of Conduct: https://getfedora.org/code-of-conduct.html
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: https://lists.fedoraproject.org/archives/list/test@xxxxxxxxxxxxxxxxxxxxxxx
