Re: future f12 test days

On Fri, 2009-04-03 at 10:51 -0700, Adam Williamson wrote:
> On Fri, 2009-04-03 at 03:35 -0500, Callum Lerwick wrote:
> 
> > The tests should be automated and run daily, hourly, or even before
> > every SCM commit. Ideally, *nothing* would be allowed to be
> > committed/pushed into the repo that broke the tests. That would also
> > provide motivation to keep the test scripts working...
> > 
> > All reported bugs should be turned into automated test cases if possible
> > and added to the test suite. This ensures once something is fixed it
> > never breaks, ever again. Regressions are not an option.
> 
> If you actually look at the test cases we've been doing, this is not
> practical. How do you automate "you should see a smooth graphical boot
> process that fades into GDM"?

You do low-level unit tests of all render paths, reading back from the
framebuffer to verify the output.
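
For a single render path that boils down to something like this
(untested sketch; it assumes GLX and a running X server, and the
clear-to-red case is just illustrative):

  /* rendertest.c: one render path, verified by framebuffer readback.
   * Build: cc rendertest.c -lGL -lX11 -o rendertest */
  #include <stdio.h>
  #include <X11/Xlib.h>
  #include <GL/gl.h>
  #include <GL/glx.h>

  int main(void)
  {
      Display *dpy = XOpenDisplay(NULL);
      if (!dpy) { fprintf(stderr, "SKIP: no display\n"); return 77; }

      int attrs[] = { GLX_RGBA, GLX_DOUBLEBUFFER, None };
      XVisualInfo *vi = glXChooseVisual(dpy, DefaultScreen(dpy), attrs);
      if (!vi) { fprintf(stderr, "SKIP: no GLX visual\n"); return 77; }

      /* Create a small window with the GLX visual's colormap. */
      XSetWindowAttributes swa;
      swa.colormap = XCreateColormap(dpy, DefaultRootWindow(dpy),
                                     vi->visual, AllocNone);
      swa.border_pixel = 0;
      Window win = XCreateWindow(dpy, DefaultRootWindow(dpy), 0, 0,
                                 64, 64, 0, vi->depth, InputOutput,
                                 vi->visual, CWColormap | CWBorderPixel,
                                 &swa);
      GLXContext ctx = glXCreateContext(dpy, vi, NULL, True);
      XMapWindow(dpy, win);
      glXMakeCurrent(dpy, win, ctx);

      /* Exercise one render path: clear the back buffer to pure red. */
      glClearColor(1.0f, 0.0f, 0.0f, 1.0f);
      glClear(GL_COLOR_BUFFER_BIT);
      glFinish();

      /* Read back from the framebuffer and check the result. */
      unsigned char px[4] = { 0 };
      glReadPixels(0, 0, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, px);
      int ok = px[0] > 250 && px[1] < 5 && px[2] < 5;
      printf("%s: got %d,%d,%d\n", ok ? "PASS" : "FAIL",
             px[0], px[1], px[2]);

      glXDestroyContext(dpy, ctx);
      XCloseDisplay(dpy);
      return ok ? 0 : 1;
  }

Write one of those per render path, compare the readback against a
known-good reference, and the whole lot runs unattended in seconds.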

> Or looking at the smoothness of video
> playback?

Yes, some things can't be fully automated, but you can still automate
them to the point where a test suite tells the user what to look for,
displays a test case, and asks, "Hey, did that look right?". Two
clicks and ~10 seconds per test.
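
Such a harness is maybe a page of C. A rough sketch (the test table
and the commands in it are made-up examples, not an actual test plan):

  /* lookright.c: semi-automated visual test runner. */
  #include <stdio.h>
  #include <stdlib.h>

  struct visual_test {
      const char *look_for;   /* what the tester should see */
      const char *command;    /* puts the test case on screen */
  };

  static const struct visual_test tests[] = {
      { "a spinning, correctly shaded gear wheel",
        "timeout 10 glxgears" },
      { "smooth, tear-free video playback",
        "mplayer -really-quiet sample.ogv" },   /* hypothetical clip */
  };

  int main(void)
  {
      int failed = 0;
      for (size_t i = 0; i < sizeof tests / sizeof tests[0]; i++) {
          printf("Look for: %s\n", tests[i].look_for);
          (void)system(tests[i].command);       /* show the test case */
          printf("Did that look right? [y/n] ");
          char line[16];
          if (!fgets(line, sizeof line, stdin))
              break;
          if (line[0] != 'y' && line[0] != 'Y') {
              printf("FAIL: %s\n", tests[i].look_for);
              failed++;
          }
      }
      return failed ? 1 : 0;
  }

Point being, the human is reduced to one keystroke per test;
everything else is scripted.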

Out of all the tests on the Radeon test day, the XV, DPMS and
multihead tests were the only ones that couldn't be easily automated.

Though with enough money you could automate even those, using cameras
and video capture hardware, or, for more $$$, high-speed digital
sampling hardware... Get a hardware hacker to instrument the laptop's
backlight...

How much is stability worth to Red Hat Inc?

> I like the idea of automated testing, but a lot of stuff - especially
> the really critical stuff to do with hardware - is not amenable to
> automation. Or at least it would be extremely difficult.
> 
> > The lack of automated testing in the project is saddening. However a lot
> > of the problem is hardware. We really need a diverse hardware testing
> > lab. As it is, testing seems to get done on the latest, shiniest hardware
> > of the month that the (paid) developers just bought, leaving those of us
> > who don't buy new hardware every month/year in a dustpile of
> > regressions.
> 
> Will Woods and Jesse Keating are working very hard on automated QA
> topics, but as I said, I just don't think automation can ever be the
> answer for some areas.

... Really what I have in mind here is stuff like "Video playback locks
up the machine", "OpenGL locks up the machine", "Second Life hangs the X
server", "World of Warcraft crashes the X server", "This web site
crashes the X server", "Rosegarden crashes the X server". All of which
are entirely automatable, though the "hard locks the machine" cases
will require some sort of hardware watchdog arrangement...

https://bugzilla.redhat.com/show_bug.cgi?id=441665
https://bugzilla.redhat.com/show_bug.cgi?id=474973
https://bugzilla.redhat.com/show_bug.cgi?id=474977
https://bugzilla.redhat.com/show_bug.cgi?id=487432
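
Take the "crashes the X server" class: a rough sketch of the whole
test, in C (the client name and the 30-second limit are placeholders,
and note that XSync would itself block on a hard server hang, which is
exactly where the watchdog hardware comes in):

  /* survives.c: run a suspect client under a time limit, then ask
   * the X server if it is still alive.  Build: cc survives.c -lX11 */
  #include <stdio.h>
  #include <stdlib.h>
  #include <X11/Xlib.h>

  static int display_alive(void)
  {
      Display *dpy = XOpenDisplay(NULL);
      if (!dpy)
          return 0;
      XSync(dpy, False);   /* round-trip; fails if the server died */
      XCloseDisplay(dpy);
      return 1;
  }

  int main(void)
  {
      /* Let the reproducer run for at most 30 seconds; coreutils
       * timeout(1) kills it after that. */
      (void)system("timeout 30 ./suspect-client");

      if (display_alive()) {
          puts("PASS: X server survived");
          return 0;
      }
      puts("FAIL: X server is gone");
      return 1;
  }

A hardware watchdog then only has to cover the case where this
process never gets to report anything at all.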

At any rate, *some* testing is way better than none at all.

