Re: [fedora-qa] Issue #568: Proposal to split the Desktop Menus Testcase.

On Wed, Nov 7, 2018 at 4:02 PM Lukas Ruzicka <lruzicka@xxxxxxxxxx> wrote:


Furthermore, I can't imagine how you'd make sure that *all* applications have really been tested using OpenQA. Let's say there are 20 default applications in F29 Workstation, and you implement the test for all of them. If a new default application is added in F30 (so there are 21 of them), OpenQA can't spot that - it will claim that all applications start successfully, but it only covers the apps known to OpenQA, not all apps available in that environment. So this will need regular verification, which is unfortunately quite easy to forget about.


Well, yes, it will. However, the situation as it is now needs regular testing, which is likely to be put aside until the very last day before the freeze (and I am sure the test statistics prove it - https://www.happyassassin.net/testcase_stats/28/Desktop/QA_Testcase_desktop_menus_Release_blocking_desktops___lt_b_gt_x86___x86_64_lt__b_gt_.html), because this is the most monkey-like testing business we have and it really takes strong nerves to do it. With automation, we would still have to keep an eye on that, but we could do it once per release and still be able to test the majority of daily composes.
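A small helper along those lines could also make that once-per-release verification harder to forget. Here is a minimal sketch (Python; covered_apps.txt is a hypothetical list of desktop files known to the automation that we would have to maintain) which just diffs the installed .desktop entries against the apps the automation knows about:

#!/usr/bin/env python3
# Hypothetical helper: warn when the installed desktop applications drift
# away from the list of apps covered by our automated tests.
# covered_apps.txt is an assumed local file maintained by QA, with one
# desktop-file basename per line, e.g. "org.gnome.Maps.desktop".
import glob
import os

COVERED_LIST = "covered_apps.txt"
APP_DIRS = ["/usr/share/applications"]

def installed_desktop_files():
    files = set()
    for directory in APP_DIRS:
        for path in glob.glob(os.path.join(directory, "*.desktop")):
            files.add(os.path.basename(path))
    return files

def covered_desktop_files():
    with open(COVERED_LIST) as listed:
        return {line.strip() for line in listed if line.strip()}

if __name__ == "__main__":
    missing = installed_desktop_files() - covered_desktop_files()
    if missing:
        print("Installed but not covered by the automation:")
        for name in sorted(missing):
            print("  " + name)
    else:
        print("All installed desktop files are covered.")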

I agree it's an extremely annoying test case and it would be nice to make the experience more pleasant somehow. But I'm very skeptical that our current automation tools can solve it completely. That's why I suggested a gradual approach of implementing basic functionality testing for a few selected apps (the simplest ones) and seeing how it goes (see my previous email). Of course the scripts need to be separate, so that a failure in one doesn't crash the whole test.
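To make the "separate scripts" point concrete, the per-app checks do not even have to share a process. This is only an illustration (plain Python, not openQA code, and the per-app check commands are placeholders) of running every app's basic check in its own subprocess, so that a crash or hang in one check fails only that one app:

#!/usr/bin/env python3
# Illustration only: run one basic-functionality check per application in a
# separate subprocess, so a crash or hang in one check cannot abort the rest.
# The check commands below are placeholders, not real test scripts.
import subprocess

CHECKS = {
    "gnome-maps": ["./check_gnome_maps.sh"],
    "gnome-calculator": ["./check_gnome_calculator.sh"],
}

results = {}
for app, cmd in CHECKS.items():
    try:
        proc = subprocess.run(cmd, timeout=120)
        results[app] = "pass" if proc.returncode == 0 else "fail"
    except (subprocess.TimeoutExpired, OSError):
        results[app] = "fail"

for app, result in sorted(results.items()):
    print(app + ": " + result)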
 

 
 
3. *Desktop Menus Basic Functionality*, in which we would test whether the apps basically work (this could perhaps be automated in OpenQA if the basic functions are not too complicated)

As long as we do this manually, I'm not sure what this proposal really brings us, except for (ironically) more bureaucratic work with the matrices. If I test basic functionality manually, I automatically see whether the menu item visuals are OK (name, icon) and whether the app can be started (kind of a prerequisite for basic functionality :)). So I'm not sure how separate test cases save us any work. Can you clarify where you see the time being saved with this?


Yeah, you are right here, again. So why not take it a step further:
We could create a submatrix of all basic applications

That's a lot of result fields, covering all apps in GNOME, KDE and XFCE (because of ARM). Or do you suggest using a different approach for different environments? Even if we did it just for GNOME, it would be annoyingly large. I'm not completely against it, but I don't see a reason to do this just for GNOME, and I'm really scared of seeing this for all 3 environments.

Anyway, I don't think we really need to do this as the first step. We *don't* need a results field to automate something. All we need is a notification when a test fails, or someone regularly looking into the OpenQA web UI (and I believe Adam does that). That's why I suggested (see my previous email) writing some PoC in OpenQA, seeing how it goes, and if it works well, we can consider separating those apps in the results matrix, so that we can easily see their results when doing release validation and avoid re-testing those apps.
 
(that does not change with a compose)

Not sure I follow. The pre-installed apps might change any time during the development cycle, either intentionally or by accident.
 
and write a simple terminal script that would be able to report the installed version, the date, and the result of the test case to that matrix, so anybody working with one of those applications could run:

desktoptest fail gnome-maps
or
desktoptest pass gnome-maps

and all results would be collected, so after some time we could see how many apps were tested and when, and for the compose matrix we could test just those which have not been tested otherwise.
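Such a reporting script could start out really small. A rough sketch (Python; where the report actually goes is left as a print stub, because no such matrix endpoint exists yet, and the rpm query assumes the application name matches the package name):

#!/usr/bin/env python3
# Hypothetical "desktoptest" reporter, e.g.:  desktoptest pass gnome-maps
# Collects the application name, installed version, date and result.
# Submission to the results matrix is a stub, since no endpoint exists yet.
import subprocess
import sys
from datetime import date

def installed_version(package):
    # Ask rpm for the installed version of the package, if any.
    proc = subprocess.run(
        ["rpm", "-q", "--qf", "%{VERSION}-%{RELEASE}", package],
        stdout=subprocess.PIPE, universal_newlines=True)
    return proc.stdout.strip() if proc.returncode == 0 else "not-installed"

def submit(report):
    # Stub: print instead of posting to the (not yet existing) matrix.
    print(report)

if __name__ == "__main__":
    if len(sys.argv) != 3 or sys.argv[1] not in ("pass", "fail"):
        sys.exit("usage: desktoptest pass|fail <application>")
    result, app = sys.argv[1], sys.argv[2]
    submit({
        "application": app,
        "version": installed_version(app),
        "date": date.today().isoformat(),
        "result": result,
    })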

If only we had a proper TCMS, right? With our current solutions, my concerns from above apply.

_______________________________________________
test mailing list -- test@xxxxxxxxxxxxxxxxxxxxxxx
To unsubscribe send an email to test-leave@xxxxxxxxxxxxxxxxxxxxxxx
Fedora Code of Conduct: https://getfedora.org/code-of-conduct.html
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: https://lists.fedoraproject.org/archives/list/test@xxxxxxxxxxxxxxxxxxxxxxx
