Re: [Autotest] [RFC] KVM test: Refactoring the kvm control file and the config file

> The advantages I see are: 1. it more closely follows the current
> autotest structure/layout, 2. it solves the problem of separating each
> test out of the ever-growing kvm_test.py and gives each test its own
> subdirectory for better structure (something we have been talking
> about), and 3. it addresses the config vs. control file issue that
> this thread originally brought up.
>
> I think the issue is in how the "kvm test" is viewed.  Is it one test
> that gets run against several configurations, or is it several different
> tests with different configurations?  I have been looking at it as the
> latter, though I can also see it the other way.

I think if you try to force everything you do into one test, you'll lose
a lot of the power and flexibility of the system. I can't claim to have
entirely figured out what you're doing, but it seems somewhat like
you're reinventing some of the existing machinery with the current approach.

Some of the general design premises:
   1) Anything the user might want to configure should be in the control file.
   2) Anything inside the test itself should be pretty static.
   3) The way we get around a lot of the conflicts is by passing parameters
      to run_test, while leaving sensible defaults in place so the test is
      easy to use as-is (see the sketch below).
   4) The frontend and cli are designed to allow you to edit control files
      and/or save custom versions - the control file is the single object
      we throw to machines under test, so there's no separate passing of
      cfg files to clients.
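
To make premises 1-3 concrete, here's a minimal sketch of the division
of labor between a client control file and its test. The 'kvm' test name
and the specific knobs (qemu_binary, guest_image) are illustrative
assumptions, not existing code:

    # --- control file (sketch); 'job' is injected by the autotest harness ---
    # Everything a user might want to tune lives here and is passed
    # straight through to the test.
    job.run_test('kvm',
                 tag='smoke',                      # keeps result dirs distinct
                 qemu_binary='/usr/bin/qemu-kvm',  # hypothetical knob
                 guest_image='fedora.qcow2')       # hypothetical knob

    # --- client/tests/kvm/kvm.py (sketch) ---
    from autotest_lib.client.bin import test

    class kvm(test.test):
        version = 1

        # Sensible defaults mean a bare job.run_test('kvm') still works,
        # while any control file can override whatever it cares about.
        def run_once(self, qemu_binary='/usr/bin/qemu-kvm',
                     guest_image='fedora.qcow2'):
            pass  # actual test logic would go here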

We often end up with longer control files that contain a pre-canned set of
tests, and even "meta-control files" that kick off a multitude of jobs across
thousands of machines, using frontend.py. That can include control flow -
for example our internal kernel testing uses a waterfall model with several
steps:

1. Compile the kernel from source.
2. Test on a bunch of single machines with a smoketest that takes an
   hour or so.
3. Test on small groups of machines with cut-down simulations of
   cluster tests.
4. Test on full clusters.

If any of those tests fails (with some built-in fault tolerance for a small
hardware fallout rate), we stop the testing. All of that control flow is
governed by a control file. It sounds complex, but it really isn't if you
build your "building blocks" carefully, and it's extremely powerful. A
rough sketch of the idea follows.
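
The stage names, machine counts, and the run_stage helper below are all
made up; a real version would submit and poll jobs through frontend.py
rather than the stub shown:

    TOLERANCE = 0.95  # assumed fraction of machines that must pass a stage

    STAGES = [
        # (stage name, control file to submit, machine count) - illustrative
        ('build',        'control.build_kernel', 1),
        ('smoke',        'control.smoketest',    20),
        ('mini-cluster', 'control.mini_cluster', 24),
        ('full-cluster', 'control.full_cluster', 200),
    ]

    def run_stage(name, control, machine_count):
        """Hypothetical helper: submit a job through frontend.py, wait
        for it to finish, and return the fraction of machines that
        passed. Stubbed out here."""
        return 1.0

    # Each stage only starts if the previous one passed; a small
    # hardware-fallout rate is tolerated.
    for name, control, count in STAGES:
        pass_rate = run_stage(name, control, count)
        if pass_rate < TOLERANCE:
            print('stage %s failed (%d%% passed); stopping the waterfall'
                  % (name, int(pass_rate * 100)))
            break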

> So maybe the solution is a little different from my first thought...
>
> - all kvm tests are in $AUTOTEST/client/kvm_tests/
> - all kvm tests inherit from $AUTOTEST/client/common_lib/kvm_test.py
> - common functionality is in $AUTOTEST/client/common_lib/kvm_test_utils/
>   - this does *not* include a generic kvm_test.cfg
> - we keep the $AUTOTEST/client/kvm/ test dir, which defines the test runs
> and houses the kvm_test.cfg file and a master control.
> - we could then define a couple of sample test runs (full, quick, and
> others), or implement something like your kvm_tests.common file that
> other test runs can build on.
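
As an aside, a minimal sketch of what one test might look like under
that proposed layout; only the paths come from the proposal, and the
KvmTest base class and its helpers are assumptions:

    # $AUTOTEST/client/kvm_tests/migration/migration.py (sketch)
    from autotest_lib.client.common_lib import kvm_test  # proposed shared base

    class migration(kvm_test.KvmTest):  # hypothetical base class name
        version = 1

        def run_once(self, **params):
            # start_vm/migrate/verify_guest_alive are hypothetical
            # helpers the shared kvm_test base would provide.
            vm = self.start_vm(params)
            vm.migrate()
            self.verify_guest_alive(vm)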

Are all of your tests exclusive to KVM? I would think you'd want to be able
to run any "normal" test inside a KVM environment too?
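
For example, I'd want a control file to be able to do something like
this, where run_in_guest boots a guest, installs the autotest client
inside, and runs an ordinary test there - the helper and its API are
purely hypothetical:

    def run_in_guest(test_name, **test_args):
        """Hypothetical: boot a KVM guest, install the autotest client
        in it, run the named test there, and pull the results back to
        the host. Stubbed out here."""
        pass

    # Any stock client test would then run unmodified inside the guest:
    run_in_guest('sleeptest', seconds=60)
    run_in_guest('kernbench')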
