Re: [PATCH] [Autotest] [KVM-AUTOTEST] fix tap interface for parallel execution

----- "Yogananth Subramanian" <anantyog@xxxxxxxxxxxxxxxxxx> wrote:

> Adds support for creating guests with different MAC addresses during
> parallel execution of autotest.  This is done by creating worker
> dicts with a different "address_index" for each worker.
> 
> Signed-off-by: Yogananth Subramanian <anantyog@xxxxxxxxxxxxxxxxxx>
> ---
>  client/tests/kvm/kvm_scheduler.py |    3 ++-
>  1 files changed, 2 insertions(+), 1 deletions(-)
> 
> diff --git a/client/tests/kvm/kvm_scheduler.py b/client/tests/kvm/kvm_scheduler.py
> index 93b7df6..9000391 100644
> --- a/client/tests/kvm/kvm_scheduler.py
> +++ b/client/tests/kvm/kvm_scheduler.py
> @@ -33,7 +33,8 @@ class scheduler:
>          # "Personal" worker dicts contain modifications that are applied
>          # specifically to each worker.  For example, each worker must use a
>          # different environment file and a different MAC address pool.
> -        self.worker_dicts = [{"env": "env%d" % i} for i in range(num_workers)]
> +        self.worker_dicts = [{"env": "env%d" % i, "address_index": i-1} 
> +                             for i in range(num_workers)]

This approach won't work in the general case -- some tests use more than 1 VM
and each VM requires a different address_index.
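
(For instance, a two-VM test has to give each VM its own index in the
config, along these lines -- the parameter names below are only meant
to illustrate the usual per-VM suffix convention, they're not copied
from a real config file:

vms = vm1 vm2
address_index_vm1 = 0
address_index_vm2 = 1

so a single per-worker address_index can't cover both VMs.)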

address_pools.cfg defines, for each host, a MAC address pool.
Every pool consists of several contiguous ranges, and looks something like this:

address_ranges = r1 r2 r3

address_range_base_mac_r1 = 52:54:00:12:34:56
address_range_size_r1 = 20

address_range_base_mac_r2 = 52:54:00:12:80:00
address_range_size_r2 = 20

... (more ranges here)

The pool itself needs to be split between the parallel workers, so that each
worker has its own completely separate pool.  In other words, the parameters
address_ranges, address_range_base_mac_* and address_range_size_* need to be
modified in 'self.worker_dicts', not address_index.

For example, if a pool has 2 ranges:
r1                r2
------------      -------------
and there are 3 workers, the pool needs to be distributed evenly like this:
r1      r2        r3    r4
------- ----      ----- -------
so that worker A gets r1, worker B gets [r2, r3] and worker C gets r4.

This shouldn't be very hard.  I'll see if I can work on a patch that will do this.
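
Roughly, I'm thinking of something like the following (an untested
sketch -- split_pool() and the other helpers are names I'm making up
here, and the real patch would read the ranges out of the parsed
config rather than take a list argument):

def mac_to_int(mac):
    return int(mac.replace(":", ""), 16)

def int_to_mac(n):
    s = "%012x" % n
    return ":".join(s[i:i+2] for i in range(0, 12, 2))

def split_pool(ranges, num_workers):
    # ranges is a list of (base_mac, size) tuples taken from the
    # config, e.g. [("52:54:00:12:34:56", 20), ("52:54:00:12:80:00", 20)].
    total = sum(size for base, size in ranges)
    per_worker = total // num_workers    # leftover addresses go unused
    pools = []
    it = iter(ranges)
    base, left = 0, 0
    for w in range(num_workers):
        need = per_worker
        worker_ranges = []
        while need > 0:
            if left == 0:
                base_mac, left = next(it)
                base = mac_to_int(base_mac)
            take = min(left, need)
            worker_ranges.append((int_to_mac(base), take))
            base += take
            left -= take
            need -= take
        pools.append(worker_ranges)
    return pools

def pool_params(worker_ranges):
    # Turn one worker's ranges into the dict that gets merged into
    # self.worker_dicts, overriding the global pool definition.
    d = {"address_ranges": " ".join("r%d" % i
                                    for i in range(len(worker_ranges)))}
    for i, (mac, size) in enumerate(worker_ranges):
        d["address_range_base_mac_r%d" % i] = mac
        d["address_range_size_r%d" % i] = str(size)
    return d

With the example above (two ranges of 20 addresses and 3 workers),
each worker ends up with 13 addresses: worker 0 gets the first 13 of
r1, worker 1 gets the last 7 of r1 plus the first 6 of r2, and worker
2 gets the next 13 of r2, which matches the picture.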

>  
>  
>      def worker(self, index, run_test_func):
> -- 
> 1.6.0.4
> 
