Re: [PATCH v2 5/5] qemu: Check for thread <=> memory alignment

On Thu, Jul 07, 2016 at 02:12:32PM +0200, Martin Kletzander wrote:
> Some settings may be confusing, and when users combine numad placement with
> static placement we could warn them, as the combination might not be what
> they want (but it is not forbidden).
> 
> Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1254402
> 
> Signed-off-by: Martin Kletzander <mkletzan@xxxxxxxxxx>
> ---
>  src/qemu/qemu_process.c | 75 ++++++++++++++++++++++++++++++++++++++++++++++++-
>  1 file changed, 74 insertions(+), 1 deletion(-)
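For illustration, a minimal sketch of the configuration combination the
commit message describes: a static vCPU cpuset together with numad-driven
strict memory placement. The vCPU count and cpuset values are made-up
assumptions, not taken from the bug report:

  <vcpu placement='static' cpuset='0-7'>8</vcpu>
  <numatune>
    <!-- numad picks the nodeset when the domain starts; mode='strict'
         confines guest memory to that nodeset, so any pinned vCPUs outside
         the CPUs of those nodes lose local access to the allowed memory -->
    <memory mode='strict' placement='auto'/>
  </numatune>

If numad ends up choosing a nodeset whose nodes do not cover every CPU in
the cpuset, the check below would emit its warning.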
> 
> diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
> index 0aab01fd4d50..c012b6efcab6 100644
> --- a/src/qemu/qemu_process.c
> +++ b/src/qemu/qemu_process.c
> @@ -2304,6 +2304,76 @@ qemuProcessSetLinkStates(virQEMUDriverPtr driver,
>  }
> 
> 
> +static int
> +qemuProcessCheckCpusMemsAlignment(virQEMUDriverPtr driver,
> +                                  virDomainObjPtr vm,
> +                                  virBitmapPtr cpumask,
> +                                  const char *mem_mask)
> +{
> +    int ret = -1;
> +    int hostnodes = 0;
> +    char *cpumask_str = NULL;
> +    char *tmpmask_str = NULL;
> +    char *mem_cpus_str = NULL;
> +    virCapsPtr caps = NULL;
> +    virBitmapPtr tmpmask = NULL;
> +    virBitmapPtr mem_cpus = NULL;
> +    virBitmapPtr mem_nodes = NULL;
> +    virDomainNumatuneMemMode mem_mode;
> +
> +    if (virDomainNumatuneGetMode(vm->def->numa, -1, &mem_mode) != 0)
> +        return 0;
> +
> +    if (mem_mode != VIR_DOMAIN_NUMATUNE_MEM_STRICT)
> +        return 0;
> +
> +    if (!mem_mask || !cpumask)
> +        return 0;
> +
> +    if (!(caps = virQEMUDriverGetCapabilities(driver, false)))
> +        goto cleanup;
> +
> +    if (!(tmpmask = virBitmapNewCopy(cpumask)))
> +        goto cleanup;
> +
> +    if ((hostnodes = virNumaGetMaxNode()) < 0)
> +        goto cleanup;
> +
> +    if (virBitmapParse(mem_mask, &mem_nodes, hostnodes) < 0)
> +        goto cleanup;
> +
> +    if (!(mem_cpus = virCapabilitiesGetCpusForNodemask(caps, mem_nodes)))
> +        goto cleanup;
> +
> +    virBitmapSubtract(tmpmask, mem_cpus);
> +    if (!virBitmapIsAllClear(tmpmask)) {
> +        if (!(cpumask_str = virBitmapFormat(cpumask)))
> +            goto cleanup;
> +
> +        if (!(tmpmask_str = virBitmapFormat(tmpmask)))
> +            goto cleanup;
> +
> +        if (!(mem_cpus_str = virBitmapFormat(mem_cpus)))
> +            goto cleanup;
> +
> +        VIR_WARN("CPUs '%s' in cpumask '%s' might not have access to any NUMA "
> +                 "node in memory's nodeset '%s' which consists of CPUs: '%s'.",
> +                 tmpmask_str, cpumask_str, mem_mask, mem_cpus_str);

We've had a general principle that we don't use VIR_WARN for this kind of
thing, because libvirtd logs are generally invisible to the person who is
making the mistake. Meanwhile, if the configuration is intentional, we're
spamming the logs for a situation the user explicitly chose.
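
For comparison, a minimal sketch of what a user-visible report could look
like if the mismatch were treated as a configuration error rather than a
log-only warning; the error code and message wording are assumptions, not
something the patch (or this reply) proposes:

    /* Sketch only: report the mismatch through the libvirt error machinery
     * so it reaches the caller instead of only the daemon log.  The choice
     * of VIR_ERR_CONFIG_UNSUPPORTED is an assumption. */
    virReportError(VIR_ERR_CONFIG_UNSUPPORTED,
                   _("CPUs '%s' in cpumask '%s' have no access to any NUMA "
                     "node in memory nodeset '%s' (whose CPUs are '%s')"),
                   tmpmask_str, cpumask_str, mem_mask, mem_cpus_str);
    goto cleanup;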

So NACK to the entire patch, as it doesn't do anything useful IMHO.

Regards,
Daniel
-- 
|: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org              -o-             http://virt-manager.org :|
|: http://autobuild.org       -o-         http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org       -o-       http://live.gnome.org/gtk-vnc :|
