Re: [PATCH v2 1/4] qemu.conf changes to support multiple memory backend

Thank you so much for the detailed comments. I'll work on those.

I believe you had a couple of questions below on this one; replies are inlined.

On 8/6/24 07:35, Martin Kletzander wrote:
On Wed, Jun 05, 2024 at 04:37:35PM -0400, mgalaxy@xxxxxxxxxx wrote:
From: Michael Galaxy <mgalaxy@xxxxxxxxxx>

+                               _("Domain requesting configuration for %1$lu NUMA nodes, but memory backing directory only has (%2$lu) directory paths available. Either reduce this to one directory or provide more paths to use."),
+                               numa_node_count,
+                               cfg->nb_memoryBackingDirs);
+                return -1;
+            }
+
+            path_index = virBitmapNextSetBit(numaBitmap, -1);

What if the single pinning is to the third host node, but there are only
two memoryBackingDirs supplied in the config?

That case is totally supported (and expected), for example a *non*-vNUMA guest (a single virtual NUMA node)
running on a standard 2-socket (2-NUMA-node) host in a typical pizza box.

If I understood that correctly, that is supported without a problem. The error above describes the case where a virtual machine has *more* virtual NUMA nodes configured than the host has available to offer. In such a case, the user needs to either provide more host nodes, turn this feature off, or use a single NUMA node (in which
case all the guest nodes would be placed on a single host node).
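
To make the intent concrete, here is a minimal sketch of the check being
discussed (the wrapper function and the exact guard condition are my
reading of the thread, not the final code; only the error report is
taken verbatim from the hunk quoted above):

    /* Sketch: with a single configured directory everything can land in
     * one path, so only the multi-directory case needs at least one
     * path per guest NUMA node. */
    static int
    checkMemoryBackingDirCount(virQEMUDriverConfig *cfg,
                               size_t numa_node_count)
    {
        if (cfg->nb_memoryBackingDirs > 1 &&
            numa_node_count > cfg->nb_memoryBackingDirs) {
            virReportError(VIR_ERR_CONFIG_UNSUPPORTED,
                           _("Domain requesting configuration for %1$lu NUMA nodes, but memory backing directory only has (%2$lu) directory paths available. Either reduce this to one directory or provide more paths to use."),
                           numa_node_count,
                           cfg->nb_memoryBackingDirs);
            return -1;
        }
        return 0;
    }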

{
+    virQEMUDriver *driver = priv->driver;
    g_autofree char *path = NULL;
+    g_autoptr(virQEMUDriverConfig) cfg = virQEMUDriverGetConfig(driver);
+    size_t i;

-    if (qemuGetMemoryBackingPath(driver, vm->def, mem->info.alias, &path) < 0)
-        return -1;
+    for (i = 0; i < cfg->nb_memoryBackingDirs; i++) {
+        virDomainXMLPrivateDataCallbacks *privateData = (virDomainXMLPrivateDataCallbacks *) priv;
+        if (qemuGetMemoryBackingPath(def, privateData, i, mem->info.alias, &path) < 0)
+            return -1;


I wonder why the paths are not saved in the status XML of the running
domain.  Since they can be changed during the VM's lifetime (while
restarting libvirt) it could cause issues during clean-up.  Sure,
unlink() will set errno to ENOENT, but the proper paths will not be
cleaned up.  It'd also make this part of the code easier and safer to
use from the callers.  But that's pre-existing.

I'm not sure I'm following the question on this one. I do not think we want the paths to change during the VM's lifetime. Are you asking whether we *want* to support that? The current libvirt XML schema does not support this. The actual path is kept hidden from the guest VM
and is only known internally to libvirt.

Can you clarify?
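
For context, the backing path for a node is deterministic from the
config, which is roughly why nothing extra needs to be persisted. A
sketch of the selection step (the helper name, the memoryBackingDirs
array field, and the single-directory fallback are my illustration of
the series, not necessarily the final code):

    /* Sketch: pick the backing directory for one guest NUMA node.
     * With a single configured directory everything maps to index 0;
     * otherwise path_index (the bitmap lookup in the hunk quoted
     * earlier) selects one of the configured paths.  path_index has
     * already been validated against nb_memoryBackingDirs by the
     * check discussed above. */
    static char *
    pickBackingDirForNode(virQEMUDriverConfig *cfg,
                          size_t path_index)
    {
        if (cfg->nb_memoryBackingDirs == 1)
            path_index = 0;

        return g_strdup(cfg->memoryBackingDirs[path_index]);
    }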



-    if (unlink(path) < 0 &&
-        errno != ENOENT) {
-        virReportSystemError(errno, _("Unable to remove %1$s"), path);
-        return -1;
+        if (unlink(path) < 0 && errno != ENOENT) {
+            virReportSystemError(errno, _("Unable to remove %1$s"), path);
+            return -1;
+        }
    }

    return 0;
@@ -7268,7 +7274,7 @@ qemuProcessPrepareHost(virQEMUDriver *driver,
    if (qemuProcessPrepareHostBackendChardev(vm) < 0)
        return -1;

-    if (qemuProcessBuildDestroyMemoryPaths(driver, vm, NULL, true) < 0)
+    if (qemuProcessBuildDestroyMemoryPaths(vm, NULL, true) < 0)
        return -1;

    /* Ensure no historical cgroup for this VM is lying around bogus
@@ -8482,7 +8488,7 @@ void qemuProcessStop(virQEMUDriver *driver,
        goto endjob;
    }

-    qemuProcessBuildDestroyMemoryPaths(driver, vm, NULL, false);
+    qemuProcessBuildDestroyMemoryPaths(vm, NULL, false);

    if (!!g_atomic_int_dec_and_test(&driver->nactive) && driver->inhibitCallback)
        driver->inhibitCallback(false, driver->inhibitOpaque);
diff --git a/src/qemu/qemu_process.h b/src/qemu/qemu_process.h
index c1ea949215..eebd0a4d1b 100644
--- a/src/qemu/qemu_process.h
+++ b/src/qemu/qemu_process.h
@@ -38,13 +38,12 @@ int qemuProcessStopCPUs(virQEMUDriver *driver,
                        virDomainPausedReason reason,
                        virDomainAsyncJob asyncJob);

-int qemuProcessBuildDestroyMemoryPaths(virQEMUDriver *driver,
-                                       virDomainObj *vm,
+int qemuProcessBuildDestroyMemoryPaths(virDomainObj *vm,
                                       virDomainMemoryDef *mem,
                                       bool build);

-int qemuProcessDestroyMemoryBackingPath(virQEMUDriver *driver,
-                                        virDomainObj *vm,
+int qemuProcessDestroyMemoryBackingPath(virDomainDef *def,
+                                        qemuDomainObjPrivate *priv,
                                        virDomainMemoryDef *mem);

void qemuProcessReconnectAll(virQEMUDriver *driver);
--
2.34.1


This patch changes the configuration, so it should also adjust the
documentation, and since the configuration is used everywhere it should
also add a few unit tests with the changed configuration and a few edge
cases, just so we can be sure it really works the way it is supposed to.


Acknowledged. I already updated the documentation, but I did not add
a new unit test. I can work on that.
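
For the documentation and test work, the configuration in question
looks roughly like this (the list syntax is what this series proposes;
the paths below are placeholders, not suggested defaults):

    # qemu.conf: one backing directory per host NUMA node; a plain
    # single-string value keeps the existing single-directory behavior
    memory_backing_dir = ["/mnt/pmem0/libvirt/qemu", "/mnt/pmem1/libvirt/qemu"]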

- Michael



