Hi,

we are currently looking into testing laptops more effectively. There are two main parts to the issue:

1. Have a system to run semi-automated tests on a standalone machine and submit the results to an online server ("Fedora Tested Laptops").
2. Run parts of the tests in a fully automated fashion in a lab here in Munich.

For now I am probably going to concentrate on the first part, but full automation is still something to keep in mind. Some automation might also happen without a full CI setup (e.g. simulating the lid switch or plugging in different monitors using the Chameleon board).

Focusing on the feature set the test runner should have, I see the following requirements:

* Online submission of results
  - Initially probably just manual updates and uploads to the wiki
  - Fedora has resultsdb, but it is not designed to store larger blobs
* Ability to run standalone on a machine
  - Resume a test run after interruptions like kernel panics
  - Show test status and user instructions for tests requiring interaction, but allow the tests to run automated when servo is available
  - Allow skipping any tests requiring user interaction
* Possible to integrate into a CI setup
* Gathering of data about the hardware before and during the test
  - e.g. dmidecode, power usage, CPU states, firmware tests

So far I have had a closer look at the following tools:

* OpenQA (http://open.qa/)
* autotest (Python, http://autotest.github.io/)
* avocado (Python, https://avocado-framework.github.io/)
* resultsdb
* taskotron

Right now I think that avocado (a successor to autotest) is the best fit and can be adapted to the above needs. The only real advantage of autotest is that Google uses it on a large scale for testing Chromebooks, but it seems harder to adapt and use. Most of the other tools cover other parts of a CI infrastructure.

With this in mind, my current plan would be to work on the following items, using avocado as a base:

1. Integrate a test status plugin, including the ability to prompt with fine-grained user instructions (maybe using DBus).
2. Work on support for resuming interrupted runs (e.g. after a kernel panic).
3. Create data collection plugins and add features where sensible (e.g. maybe add RAPL power monitoring to upower).
4. Start writing test cases to exercise the above.

Opinions? Have I missed something important?

Benjamin
_______________________________________________
devel mailing list -- devel@xxxxxxxxxxxxxxxxxxxxxxx
To unsubscribe send an email to devel-leave@xxxxxxxxxxxxxxxxxxxxxxx
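P.S. To make the resume idea (plan item 2) a bit more concrete: the core of it is just journalling every result to disk before starting the next test, so a rerun after a kernel panic picks up where it left off. This is only a sketch of the concept, not avocado's actual API; the function names are made up:

```python
import json
import os

def load_journal(path):
    """Load previously recorded results, if any survived a crash."""
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f)
    return {}

def run_with_journal(tests, journal_path):
    """Run (name, callable) pairs, persisting each result to disk
    immediately so an interrupted run can be resumed by re-invoking
    this with the same journal path."""
    results = load_journal(journal_path)
    for name, test in tests:
        if name in results:
            continue  # already completed in an earlier run
        results[name] = test()
        with open(journal_path, "w") as f:
            json.dump(results, f)  # written before the next test starts
    return results
```

A real implementation would additionally record "started but not finished", so the test that triggered the panic can be flagged instead of being retried forever.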
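For the data-collection plugins (item 3), the simplest building block is probably just capturing the output of existing tools before and during the run. A rough sketch; the command table is hypothetical and the real set is up for discussion:

```python
import subprocess

def collect(commands):
    """Run each command and capture its stdout; record failures as
    strings instead of aborting, so one missing tool does not break
    the whole snapshot."""
    results = {}
    for name, argv in commands.items():
        try:
            proc = subprocess.run(argv, capture_output=True, text=True,
                                  timeout=30, check=True)
            results[name] = proc.stdout.strip()
        except (OSError, subprocess.SubprocessError) as err:
            results[name] = "ERROR: %s" % err
    return results

# Hypothetical pre-test snapshot (dmidecode needs root):
SNAPSHOT = {
    "dmidecode": ["dmidecode"],
    "cpu_idle": ["cpupower", "idle-info"],
}
```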
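On RAPL power monitoring: whether it ends up in upower or in a test plugin, the kernel already exposes the counters through the powercap sysfs interface, so sampling is cheap. A sketch, assuming an Intel package domain at /sys/class/powercap/intel-rapl:0 (the counter is cumulative microjoules and wraps at max_energy_range_uj):

```python
import time

RAPL = "/sys/class/powercap/intel-rapl:0"

def read_uj(name):
    """Read one of the powercap counter files (integer microjoules)."""
    with open("%s/%s" % (RAPL, name)) as f:
        return int(f.read())

def average_power_w(start_uj, end_uj, seconds, max_range_uj):
    """Average power in watts between two samples of the cumulative
    energy counter, handling a single counter wraparound."""
    delta_uj = end_uj - start_uj
    if delta_uj < 0:
        delta_uj += max_range_uj  # counter wrapped once
    return delta_uj / seconds / 1e6  # uJ/s -> W

def sample_power(seconds=1.0):
    """Average package power over an interval; needs the powercap
    sysfs files to exist and be readable on this machine."""
    max_range = read_uj("max_energy_range_uj")
    e0 = read_uj("energy_uj")
    time.sleep(seconds)
    e1 = read_uj("energy_uj")
    return average_power_w(e0, e1, seconds, max_range)
```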