This is a followup to:

  https://www.redhat.com/archives/libvir-list/2019-May/msg00467.html

This series implements support for an embedded driver mode for libvirt, with initial support in the QEMU and secret drivers. In this mode of operation, the driver stores all its config and state under a private directory tree. See the individual patches for the illustrated directory hierarchy used.

The intent of this embedded mode is to suit cases where the application is using virtualization as a building block for some functionality, as opposed to running traditional "full OS" builds. The long-time poster child example would be libguestfs, while a more recent example could be Kata containers.

The general principle in enabling this embedded mode is that the functionality available should be identical to that seen when the driver is running inside libvirtd. This is achieved by loading the exact same driver .so module as libvirtd would load, and simply configuring it with a different directory layout. The result is that when running in embedded mode, the driver can still talk to other secondary drivers running inside libvirtd if desired. This is useful, for example, to connect a VM to the default virtual network.

The secondary drivers can be made to operate in embedded mode as well; however, this will require careful consideration for each driver to ensure they don't clash with each other. Thus in this series only the secret driver is enabled for embedded mode. This is required to enable use of VMs with encrypted disks, or authenticated network block storage.

In this series we introduce a new command line tool, 'virt-qemu-run', which is a really simple tool for launching a VM in embedded mode. I'm not entirely sure whether we should provide this as an officially supported tool in this way, or merely put it into the 'examples' directory as demo-ware.
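To make embedded usage concrete, here is a minimal sketch of a C client opening the QEMU driver in embedded mode. NB this is illustrative only: it assumes the "embed" URI selector takes the private state directory via a "root" query parameter, and the directory shown is made up for the example:

  /* build with: cc demo.c -lvirt */
  #include <stdio.h>
  #include <libvirt/libvirt.h>

  int main(void)
  {
      virConnectPtr conn;

      /* Embedded drivers run in-process, so the application must
       * provide an event loop before opening the connection (this
       * is what the event loop validation patch checks for). */
      if (virEventRegisterDefaultImpl() < 0)
          return 1;

      /* Loads the same qemu driver .so that libvirtd would load,
       * but rooted at a private directory tree. */
      conn = virConnectOpen("qemu:///embed?root=/var/tmp/myapp");
      if (!conn) {
          fprintf(stderr, "failed to open embedded QEMU driver\n");
          return 1;
      }

      /* ... virDomainDefineXML / virDomainCreate etc as normal;
       * the guest can still use secondary drivers in libvirtd,
       * e.g. the default virtual network ... */

      virConnectClose(conn);
      return 0;
  }

A real application would also need to run the event loop (e.g. virEventRunDefaultImpl() on a dedicated thread) so the driver can process QEMU monitor events.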
With testing of the virt-qemu-run tool we can immediately see what the next important thing to tackle is: performance. We have not really cared too much about the startup performance of libvirtd, as this is a one-time cost when the mgmt application connects. We did nonetheless cache capabilities, because probing caps for 30 QEMU binaries takes a long time. Even with this caching it takes an unacceptably long time to start a VM in embedded mode: about 100 ms to open the embedded QEMU driver assuming pre-cached capabilities (~2 seconds if not cached and all 30 QEMU targets are present), then about 300 ms to actually start the QEMU guest. IOW, about 400 ms to get QEMU running.

NB this is measuring time from launching the virt-qemu-run program to the point at which the API call 'virDomainCreate' returns control. This includes both libvirt & QEMU overhead, and I don't have clear figures to distinguish the two, but I can see a 40 ms delay between issuing the 'qmp_capabilities' call and getting a reply, which is QEMU startup overhead. This is an i440fx based QEMU with a general purpose virtio-pci config (disk, net, etc) typical for running a full OS. I've not tried any kind of optimized QEMU config with microvm.

I've already started on measuring & optimizing, and have identified several key areas that can be addressed. It is all ultimately about not doing work before we need the answers from that work (which often means we will never do the work at all):

 - We shouldn't probe all 30 QEMU binaries upfront. If the app is only going to create an x86_64 KVM guest, we should only care about that 1 QEMU. This is painful today because parsing any guest XML requires a virCapsPtr, which in turn causes probing of every QEMU binary. I've got in-progress patches to eliminate virCapsPtr almost entirely and work directly with the virQEMUCapsPtr instead.

 - It is possible we'll want to use a different file format for storing the cached QEMU capabilities, and the CPU feature/model info. Parsing this XML is a non-negligible time sink. A binary format is likely way quicker, especially if it's designed to be just mmap'able for direct read (a speculative sketch follows after the changelog below). To be investigated...

 - We shouldn't probe for whether host PM suspend is possible unless someone wants that info, or tries to issue that API call.

 - After starting QEMU we spend 150-200 ms issuing a massive number of qom-get calls to check whether QEMU enabled each individual CPU feature flag. We only need this info if someone asks for the live XML or we intend to live migrate, etc. So we shouldn't issue these qom-get calls in the "hot path" of QEMU startup; it can be done later at a non-time-critical point. Also, the QEMU API for this is horribly inefficient in requiring so many qom-get calls.

There's more, but I won't talk about it now. Suffice to say that I think we can get libvirt overhead down to less than 100 ms fairly easily, and probably even down to less than 50 ms without much effort. The exact figure will depend on what libvirt features you want enabled, and how much work we want/need to put into optimization. We'll want to fix the really gross mistakes & slowdowns, but we'll want guidance from likely users as to their VM startup targets to decide how much work needs investing. This optimization will ultimately help the non-embedded QEMU mode too, making it faster to respond & start.

Changed in v2:

 - Use a simplified directory layout for embedded mode. Previously we just put a dir prefix onto the normal paths. This has the downside that the embedded driver's paths are needlessly different for privileged vs unprivileged users. It also results in very long paths, which can be a problem for the UNIX socket name length limits.
 - Also ported the secret driver to support embedded mode.
 - Check to validate that the event loop is registered.
 - Add virt-qemu-run tool for embedded usage.
 - Added docs for the qemu & secret drivers explaining embedded mode.
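As flagged above, here is a speculative sketch of what an mmap'able binary capabilities cache could look like. Everything here (header fields, names, layout) is invented to illustrate the idea; it is not code from this series:

  /* Hypothetical mmap'able capabilities cache: "loading" it is a
   * single open+mmap with no parse step, and data is only paged
   * in when actually read. */
  #include <fcntl.h>
  #include <stddef.h>
  #include <stdint.h>
  #include <sys/mman.h>
  #include <sys/stat.h>
  #include <unistd.h>

  #define QEMU_CAPS_CACHE_MAGIC 0x43504143 /* invented value */

  struct qemuCapsCacheHeader {
      uint32_t magic;    /* file format identifier */
      uint32_t version;  /* bump on any layout change */
      uint64_t ctime;    /* QEMU binary ctime, for invalidation */
      uint32_t nflags;   /* number of capability flag words */
      uint32_t flags[];  /* capability bits, read in place */
  };

  static const struct qemuCapsCacheHeader *
  qemuCapsCacheLoad(const char *path, size_t *maplen)
  {
      const struct qemuCapsCacheHeader *hdr;
      struct stat sb;
      void *map;
      int fd;

      if ((fd = open(path, O_RDONLY)) < 0)
          return NULL;
      if (fstat(fd, &sb) < 0 || (size_t)sb.st_size < sizeof(*hdr)) {
          close(fd);
          return NULL;
      }

      map = mmap(NULL, sb.st_size, PROT_READ, MAP_SHARED, fd, 0);
      close(fd); /* the mapping stays valid after close */
      if (map == MAP_FAILED)
          return NULL;

      hdr = map;
      if (hdr->magic != QEMU_CAPS_CACHE_MAGIC) {
          munmap(map, sb.st_size);
          return NULL;
      }

      *maplen = sb.st_size;
      return hdr;
  }

Whether avoiding the XML parse is worth maintaining a second on-disk format is exactly the part that needs investigating.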
Daniel P. Berrangé (7):
  access: report an error if no access manager is present
  libvirt: pass a directory path into drivers for embedded usage
  event: add API for requiring an event loop impl to be registered
  libvirt: support an "embed" URI path selector for opening drivers
  qemu: add support for running QEMU driver in embedded mode
  secrets: add support for running secret driver in embedded mode
  qemu: introduce a new "virt-qemu-run" program

 build-aux/syntax-check.mk               |   2 +-
 docs/drivers.html.in                    |   1 +
 docs/drvqemu.html.in                    |  84 +++++++
 docs/drvsecret.html.in                  |  82 +++++++
 libvirt.spec.in                         |   2 +
 po/POTFILES.in                          |   1 +
 src/Makefile.am                         |   9 +
 src/access/viraccessmanager.c           |   5 +
 src/driver-state.h                      |   2 +
 src/driver.h                            |   2 +
 src/interface/interface_backend_netcf.c |   7 +
 src/interface/interface_backend_udev.c  |   7 +
 src/libvirt.c                           |  93 ++++++-
 src/libvirt_internal.h                  |   4 +-
 src/libxl/libxl_driver.c                |   7 +
 src/lxc/lxc_driver.c                    |   8 +
 src/network/bridge_driver.c             |   7 +
 src/node_device/node_device_hal.c       |   7 +
 src/node_device/node_device_udev.c      |   7 +
 src/nwfilter/nwfilter_driver.c          |   7 +
 src/qemu/Makefile.inc.am                |  26 ++
 src/qemu/qemu_conf.c                    |  38 ++-
 src/qemu/qemu_conf.h                    |   6 +-
 src/qemu/qemu_driver.c                  |  21 +-
 src/qemu/qemu_process.c                 |  15 +-
 src/qemu/qemu_shim.c                    | 313 ++++++++++++++++++++++++
 src/qemu/qemu_shim.pod                  |  94 +++++++
 src/remote/remote_daemon.c              |   1 +
 src/remote/remote_driver.c              |   1 +
 src/secret/secret_driver.c              |  41 +++-
 src/storage/storage_driver.c            |   7 +
 src/util/virevent.c                     |  25 ++
 src/util/virevent.h                     |   2 +
 src/vz/vz_driver.c                      |   7 +
 tests/domaincapstest.c                  |   2 +-
 tests/testutilsqemu.c                   |   2 +-
 36 files changed, 920 insertions(+), 25 deletions(-)
 create mode 100644 docs/drvsecret.html.in
 create mode 100644 src/qemu/qemu_shim.c
 create mode 100644 src/qemu/qemu_shim.pod

--
2.23.0

--
libvir-list mailing list
libvir-list@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/libvir-list