On 07/21/2009 08:21 PM, Martin Bligh wrote:
The advantages I see are: 1. it more closely follows the current
autotest structure/layout, 2. it solves the problem of separating each
test out of the ever-growing kvm_test.py and gives each test its own
subdirectory for better structure (something we have been talking
about), and 3. it addresses the config vs. control file question that
this thread originally brought up.
I think the issue is in how the "kvm test" is viewed. Is it one test
that gets run against several configurations, or is it several different
tests with different configurations? I have been looking at it as the
latter, though I can also see it the other way.
I think if you try to force everything you do into one test, you'll lose
a lot of the power and flexibility of the system. I can't claim to have
entirely figured out what you're doing, but it seems somewhat like
you're reinventing some stuff with the current approach?
Some of the general design premises:
1) Anything the user might want to configure should be in the control file.
2) Anything in the test itself should be pretty much static.
3) The way we get around a lot of the conflicts is by passing parameters
to run_test, though leaving sensible defaults in for them makes things
much easier to use (see the sketch after this list).
4) The frontend and CLI are designed to allow you to edit control files,
and/or save custom versions - that's the single object we throw
at machines under test ... there's no passing of cfg files to clients.
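For example, a control file along these lines keeps the user-tunable
bits in one place; the 'kvm_test' name and the params keys here are just
illustrative, not an existing test:

    # client control file - user-configurable values live here, and
    # run_test() forwards them to the test as keyword arguments
    job.run_test('kvm_test',                      # hypothetical test name
                 params={'qemu_binary': '/usr/bin/qemu-kvm',
                         'guest_image': 'fedora11.qcow2',
                         'mem_mb': 512})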
We often end up with longer control files that contain a pre-canned set of
tests, and even "meta-control files" that kick off a multitude of jobs across
thousands of machines, using frontend.py. That can include control flow -
for example our internal kernel testing uses a waterfall model with several
steps:
1. Compile the kernel from source
2. Test on a bunch of single machines with a smoketest that takes an
hour or so.
3. Test on small groups of machines with cut-down simulations of
cluster tests
4. Test on full clusters.
If any of those tests fails (with some built-in fault tolerance for a small
hardware fallout rate), we stop the testing. All of that control flow
is governed by a control file. It sounds complex, but it's really not
if you build your "building blocks" carefully, and it's extremely powerful.
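To make that concrete, the skeleton of such a meta-control file might
look something like the sketch below. run_and_wait() stands in for
frontend.py job submission plus polling and is hypothetical, as are the
machine lists and the shape of the results:

    # meta-control sketch: gate each waterfall step on the previous one.
    # run_and_wait() is a hypothetical wrapper around frontend.py job
    # creation + polling; results are assumed to carry a .status field.
    ALLOWED_HW_FALLOUT = 0.02   # tolerate a small hardware failure rate

    def failure_rate(results):
        if not results:
            return 1.0
        failed = [r for r in results if r.status != 'GOOD']
        return float(len(failed)) / len(results)

    waterfall = [('compile', build_hosts),
                 ('smoketest', single_machines),
                 ('mini-cluster', small_groups),
                 ('full-cluster', clusters)]
    for step_name, machines in waterfall:
        results = run_and_wait(step_name, machines)
        if failure_rate(results) > ALLOWED_HW_FALLOUT:
            break   # stop the waterfall; later steps never run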
+1
The highly flexible config file currently serves client-mode tests.
We need to slowly shift functionality into the server while keeping the
current advantages and simplicity of the client.
Martin, can you give some links to the meta-control files you describe above?
So maybe the solution is a little different from my first thought....
- all kvm tests live in $AUTOTEST/client/kvm_tests/
- all kvm tests inherit from $AUTOTEST/client/common_lib/kvm_test.py
  (see the sketch after this list)
- common functionality is in $AUTOTEST/client/common_lib/kvm_test_utils/
  (this does *not* include a generic kvm_test.cfg)
- we keep the $AUTOTEST/client/kvm/ test dir, which defines the test runs
  and houses the kvm_test.cfg file and a master control file.
- we could then define a couple of sample test runs (full, quick, and
  others), or implement something like your kvm_tests.common file that
  other test runs can build on.
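To illustrate the inheritance part of that layout, one test could look
roughly like this; the KvmTest base class and its helper methods are
assumptions on my part, modeled on how autotest tests normally subclass
test.test:

    # client/kvm_tests/boot/boot.py - sketch only; KvmTest, spawn_vm()
    # and wait_for_login() are hypothetical helpers, not existing code
    from autotest_lib.client.common_lib import kvm_test

    class boot(kvm_test.KvmTest):
        version = 1

        def run_once(self, params):
            vm = self.spawn_vm(params)       # boot a guest from params
            session = vm.wait_for_login()    # block until the guest is up
            session.sendline('uname -a')     # trivial in-guest sanity check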
Are all of your tests exclusive to KVM? I would think you'd want to be able
to run any "normal" test inside a KVM environment too?
There are several autotest tests that run inside the guest today too;
currently the config file controls their execution. It would be nice if
we could express this as a dependency using server tests that first
install the VM, boot it, and then run various 'normal' tests inside it
(see the sketch below).
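As a rough sketch of that dependency, a server control file could bring
the guest up and then point the ordinary client harness at it.
install_and_boot_vm() is hypothetical, while hosts.create_host() and
autotest.Autotest() are the usual server control file mechanisms for
running client tests on a machine:

    # server control file sketch: bring up a guest, then treat it as an
    # ordinary autotest client. install_and_boot_vm() is hypothetical.
    guest_hostname = install_and_boot_vm(machines[0],
                                         image='fedora11.qcow2')

    guest = hosts.create_host(guest_hostname)
    at = autotest.Autotest(guest)
    at.run_test('sleeptest')   # any "normal" client test, run in the VM
    at.run_test('dbench')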