Re: [PATCH 3/3] qemu: Return true pining info when using numad

$SUBJ

s/pining/pinning

Or perhaps: "qemu: Use numad information when getting pin information"

On 07/26/2015 12:57 PM, Martin Kletzander wrote:
> For some time now, pinning information for the emulatorpin and vcpupin
> calls has been returned from our own data without querying cgroups.
> However, not all of that data was utilized: when automatic placement is
> used, no information is returned for the calls mentioned above.  Since
> the numad hint in private data is properly saved and restored, we can
> safely use it to return the true information.
> 
> Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1162947
> 
> Signed-off-by: Martin Kletzander <mkletzan@xxxxxxxxxx>
> ---
>  src/qemu/qemu_driver.c | 11 +++++++++++
>  1 file changed, 11 insertions(+)
> 

Should qemuDomainGetIOThreadsConfig be adjusted as well?  In the for
loop that fetches/fills in each iothreadid, the cpumask is filled in
the same way.
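
If so, the same fallback could presumably be applied to the cpumask
chosen in that loop.  Just so we're talking about the same thing, a
rough sketch of the selection order these hunks establish (the helper
and its name are mine, purely illustrative, not something from the
patch):

    /* Illustrative helper, not part of the patch: pick the bitmap to
     * report, in the same order the hunks below use - explicit pinning
     * first, then the saved numad hint when automatic placement is in
     * use, and finally "all host CPUs" as the last resort. */
    static virBitmapPtr
    pickPinBitmap(virBitmapPtr pinned,      /* explicit vcpupin/emulatorpin/iothreadpin */
                  int placement,            /* def->placement_mode */
                  virBitmapPtr autoCpuset,  /* priv->autoCpuset, the saved numad hint */
                  virBitmapPtr allcpumap)   /* bitmap with every host CPU set */
    {
        if (pinned)
            return pinned;

        if (placement == VIR_DOMAIN_CPU_PLACEMENT_MODE_AUTO && autoCpuset)
            return autoCpuset;

        return allcpumap;
    }

Applying the same order when filling the iothread cpumask would keep
all three APIs consistent.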

The patches seem reasonable otherwise, although patch 2 could have a wee
bit more information in the commit log to explain what's being done...
Beyond that, does that value matter if placement_mode !=
VIR_DOMAIN_CPU_PLACEMENT_MODE_AUTO, or if
!virDomainDefNeedsPlacementAdvice (from qemuProcessStart)?  I was
checking where it gets set and whether it's set to something reasonable...

John

> diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
> index 40c882c4ba88..1e090bb5c36b 100644
> --- a/src/qemu/qemu_driver.c
> +++ b/src/qemu/qemu_driver.c
> @@ -5224,6 +5224,7 @@ qemuDomainGetVcpuPinInfo(virDomainPtr dom,
>      int ret = -1;
>      int hostcpus, vcpu;
>      virBitmapPtr allcpumap = NULL;
> +    qemuDomainObjPrivatePtr priv = NULL;
> 
>      virCheckFlags(VIR_DOMAIN_AFFECT_LIVE |
>                    VIR_DOMAIN_AFFECT_CONFIG, -1);
> @@ -5244,6 +5245,7 @@ qemuDomainGetVcpuPinInfo(virDomainPtr dom,
>          goto cleanup;
> 
>      virBitmapSetAll(allcpumap);
> +    priv = vm->privateData;
> 
>      /* Clamp to actual number of vcpus */
>      if (ncpumaps > def->vcpus)
> @@ -5262,6 +5264,9 @@ qemuDomainGetVcpuPinInfo(virDomainPtr dom,
> 
>          if (pininfo && pininfo->cpumask)
>              bitmap = pininfo->cpumask;
> +        else if (vm->def->placement_mode == VIR_DOMAIN_CPU_PLACEMENT_MODE_AUTO &&
> +                 priv->autoCpuset)
> +            bitmap = priv->autoCpuset;
>          else
>              bitmap = allcpumap;
> 
> @@ -5412,6 +5417,7 @@ qemuDomainGetEmulatorPinInfo(virDomainPtr dom,
>      int hostcpus;
>      virBitmapPtr cpumask = NULL;
>      virBitmapPtr bitmap = NULL;
> +    qemuDomainObjPrivatePtr priv = NULL;
> 
>      virCheckFlags(VIR_DOMAIN_AFFECT_LIVE |
>                    VIR_DOMAIN_AFFECT_CONFIG, -1);
> @@ -5428,10 +5434,15 @@ qemuDomainGetEmulatorPinInfo(virDomainPtr dom,
>      if ((hostcpus = nodeGetCPUCount(NULL)) < 0)
>          goto cleanup;
> 
> +    priv = vm->privateData;
> +
>      if (def->cputune.emulatorpin) {
>          cpumask = def->cputune.emulatorpin;
>      } else if (def->cpumask) {
>          cpumask = def->cpumask;
> +    } else if (vm->def->placement_mode == VIR_DOMAIN_CPU_PLACEMENT_MODE_AUTO &&
> +               priv->autoCpuset) {
> +        cpumask = priv->autoCpuset;
>      } else {
>          if (!(bitmap = virBitmapNew(hostcpus)))
>              goto cleanup;
> 

--
libvir-list mailing list
libvir-list@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/libvir-list


