Re: [PATCH RESEND RFC v4 5/6] qemu: Implement cfs_period and cfs_quota's modification

On Thu, Jul 21, 2011 at 10:11:32AM +0800, Wen Congyang wrote:
> @@ -5851,7 +5953,6 @@ static int qemuSetSchedulerParametersFlags(virDomainPtr dom,
>          virTypedParameterPtr param = &params[i];
>  
>          if (STREQ(param->field, "cpu_shares")) {
> -            int rc;
>              if (param->type != VIR_TYPED_PARAM_ULLONG) {
>                  qemuReportError(VIR_ERR_INVALID_ARG, "%s",
>                                  _("invalid type for cpu_shares tunable, expected a 'ullong'"));
> @@ -5870,19 +5971,47 @@ static int qemuSetSchedulerParametersFlags(virDomainPtr dom,
>              }
>  
>              if (flags & VIR_DOMAIN_AFFECT_CONFIG) {
> -                persistentDef = virDomainObjGetPersistentDef(driver->caps, vm);
> -                if (!persistentDef) {
> -                    qemuReportError(VIR_ERR_INTERNAL_ERROR, "%s",
> -                                    _("can't get persistentDef"));
> +                vmdef->cputune.shares = params[i].value.ul;
> +            }
> +        } else if (STREQ(param->field, "cfs_period")) {

[snip]

> +        } else if (STREQ(param->field, "cfs_quota")) {


In the XML file we now have

  <cputune>
    <shares>1024</shares>
    <period>90000</period>
    <quota>0</quota>
  </cputune>


But the schedinfo parameters are being named

 cpu_shares: 1024
 cfs_period: 90000
 cfs_quota: 0

For consistency with 'cpu_shares', these new tunables should be named
'cpu_period' and 'cpu_quota'.
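
i.e. the field checks in qemuSetSchedulerParametersFlags would just
use the renamed strings. A minimal sketch, mirroring the existing
'cpu_shares' branch quoted above (the ULLONG type check for the
period is my assumption; the body of each branch stays whatever the
patch's cfs_period/cfs_quota handling does):

        } else if (STREQ(param->field, "cpu_period")) {
            if (param->type != VIR_TYPED_PARAM_ULLONG) {
                qemuReportError(VIR_ERR_INVALID_ARG, "%s",
                                _("invalid type for cpu_period tunable, expected a 'ullong'"));
                goto cleanup;
            }
            /* ...handling identical to the patch's "cfs_period" branch... */
        } else if (STREQ(param->field, "cpu_quota")) {
            /* ...handling identical to the patch's "cfs_quota" branch... */
        }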

> +static int
> +qemuGetVcpusBWLive(virDomainObjPtr vm, virCgroupPtr cgroup,
> +                   unsigned long long *period, long long *quota)
> +{
> +    virCgroupPtr cgroup_vcpu = NULL;
> +    qemuDomainObjPrivatePtr priv = NULL;
> +    int rc;
> +    int ret = -1;
> +
> +    priv = vm->privateData;
> +    if (priv->nvcpupids == 0 || priv->vcpupids[0] == vm->pid) {
> +        /* We do not create sub dir for each vcpu */
> +        rc = qemuGetVcpuBWLive(cgroup, period, quota);
> +        if (rc < 0)
> +            goto cleanup;
> +
> +        if (*quota > 0)
> +            *quota /= vm->def->vcpus;
> +        goto out;
> +    }
> +
> +    /* get period and quota for vcpu0 */
> +    rc = virCgroupForVcpu(cgroup, 0, &cgroup_vcpu, 0);
> +    if (!cgroup_vcpu) {
> +        virReportSystemError(-rc,
> +                             _("Unable to find vcpu cgroup for %s(vcpu: 0)"),
> +                             vm->def->name);
> +        goto cleanup;
> +    }
> +
> +    rc = qemuGetVcpuBWLive(cgroup_vcpu, period, quota);
> +    if (rc < 0)
> +        goto cleanup;

This is also bad IMHO: it gives the API different semantics depending
on whether the QEMU impl has VCPU threads or not. Again, this just
says to me that we need this to apply at the VM level, not the VCPU
level.
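
A VM-level getter would then be trivial. A rough sketch (the name
qemuGetVMBWLive is hypothetical; it just reuses the patch's
qemuGetVcpuBWLive helper on the VM's own cgroup, so the result no
longer depends on whether per-VCPU sub-cgroups exist):

    static int
    qemuGetVMBWLive(virCgroupPtr cgroup,
                    unsigned long long *period, long long *quota)
    {
        /* Read period/quota from the VM-level cgroup only; no
         * per-VCPU lookup, and no division by the VCPU count. */
        return qemuGetVcpuBWLive(cgroup, period, quota);
    }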

If we really do need finer per-VCPU controls, in addition to a per-VM
control, I think that should likely be done as a separate set of
tunables.

e.g. we could have 2 sets of tunables: one that applies to the VM as
a whole, and a second set that adds further controls on top for the
individual VCPUs.

  <cputune>
    <shares>1024</shares>
    <period>90000</period>
    <quota>0</quota>

    <vcpu_shares>1024</vcpu_shares>
    <vcpu_period>90000</vcpu_period>
    <vcpu_quota>0</vcpu_quota>
  </cputune>
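
The matching schedinfo parameters would then presumably mirror those
element names, e.g.:

 cpu_shares: 1024
 cpu_period: 90000
 cpu_quota: 0
 vcpu_shares: 1024
 vcpu_period: 90000
 vcpu_quota: 0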


Daniel.
-- 
|: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org              -o-             http://virt-manager.org :|
|: http://autobuild.org       -o-         http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org       -o-       http://live.gnome.org/gtk-vnc :|

--
libvir-list mailing list
libvir-list@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/libvir-list

