On Fri, 2009-04-03 at 03:35 -0500, Callum Lerwick wrote:

> The tests should be automated and run daily, hourly, or even before
> every SCM commit. Ideally, *nothing* would be allowed to be
> committed/pushed into the repo that broke the tests. That would also
> provide motivation to keep the test scripts working...
>
> All reported bugs should be turned into automated test cases if possible
> and added to the test suite. This ensures once something is fixed it
> never breaks, ever again. Regressions are not an option.

If you actually look at the test cases we've been doing, this is not
practical. How do you automate "you should see a smooth graphical boot
process that fades into GDM"? Or checking the smoothness of video
playback? I like the idea of automated testing, but a lot of stuff -
especially the really critical stuff to do with hardware - is not
amenable to automation. Or at least it would be extremely difficult.

> The lack of automated testing in the project is saddening. However a lot
> of the problem is hardware. We really need a diverse hardware testing
> lab. As it is, testing seems to get done on the latest, shiniest hardware
> of the month that the (paid) developers just bought, leaving those of us
> who don't buy new hardware every month/year in a dustpile of
> regressions.

Will Woods and Jesse Keating are working very hard on automated QA
topics, but as I said, I just don't think automation can ever be the
answer for some areas.

-- 
Adam Williamson
Fedora QA Community Monkey
IRC: adamw | Fedora Talk: adamwill AT fedoraproject DOT org
http://www.happyassassin.net

-- 
fedora-devel-list mailing list
fedora-devel-list@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/fedora-devel-list