On Mon, Mar 18, 2024 at 01:27:00PM +0100, Michal Prívozník wrote:
On 3/16/24 15:24, aheath1992@xxxxxxxxx wrote:
So like in oVirt, VMware, and other VM managers, I would like a tool or application where I can say: if a hypervisor resource (let's say CPU in this case) is over 80% utilization, I want the tool, script, or application to check the other hypervisors and migrate VMs to them, balancing the load and resources across all available hypervisors.
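Just to make the idea concrete, here is a minimal sketch of what such a balancer could look like against plain libvirt, using the libvirt-python bindings. The host URIs, the 80% threshold and the "migrate the first running guest to the least-loaded host" policy are all assumptions made up for the example, not a recommendation.

import time
import libvirt

HOSTS = ["qemu+ssh://host1/system", "qemu+ssh://host2/system"]  # hypothetical URIs
CPU_THRESHOLD = 0.80
SAMPLE_SECONDS = 5

def host_cpu_utilization(conn):
    """Approximate overall host CPU utilization from two getCPUStats() samples."""
    def snapshot():
        stats = conn.getCPUStats(libvirt.VIR_NODE_CPU_STATS_ALL_CPUS)
        busy = stats["kernel"] + stats["user"]
        return busy, busy + stats["idle"] + stats["iowait"]
    busy1, total1 = snapshot()
    time.sleep(SAMPLE_SECONDS)
    busy2, total2 = snapshot()
    return (busy2 - busy1) / float(total2 - total1)

def rebalance():
    conns = {uri: libvirt.open(uri) for uri in HOSTS}
    loads = {uri: host_cpu_utilization(c) for uri, c in conns.items()}
    for uri, load in loads.items():
        if load <= CPU_THRESHOLD:
            continue
        # Naive placement: pick the least-loaded other host.  A real scheduler
        # would also check free memory, CPU model compatibility, shared
        # storage, and so on.
        dest = min((u for u in HOSTS if u != uri), key=loads.get)
        doms = conns[uri].listAllDomains(libvirt.VIR_CONNECT_LIST_DOMAINS_ACTIVE)
        if doms:
            # Naive victim selection: the first running guest.
            doms[0].migrate(conns[dest], libvirt.VIR_MIGRATE_LIVE, None, None, 0)
    for c in conns.values():
        c.close()

if __name__ == "__main__":
    rebalance()

Note that live migration like this assumes the hosts share storage (or that you add the non-shared-disk flags), which is exactly the kind of constraint that makes a generic tool hard.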
Moving a VM between different hypervisors will likely require some conversion, both of its definition (due to differing availability of emulated HW) and inside the guest itself. One such tool is virt-v2v [1] (part of guestfs-tools, under the libguestfs umbrella), but that is not a migration, and definitely not a live one.
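For illustration only, such a conversion could be scripted along these lines; the vCenter URI, guest name and storage pool are made-up placeholders here, and the real option set is described in the virt-v2v manual.

import subprocess

# Convert a (shut-off) guest from VMware vCenter and define it on local libvirt.
subprocess.run(
    [
        "virt-v2v",
        "-i", "libvirt",                                   # read the guest via libvirt
        "-ic", "vpx://vcenter.example.com/Datacenter/esx1?no_verify=1",
        "guest-to-convert",
        "-o", "libvirt",                                   # define the result locally
        "-os", "default",                                  # target storage pool
    ],
    check=True,
)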
Not that I know of any such tool, but there is a website listing tools known to use libvirt [1]. You may find what you're looking for in one of them, but I have a hunch you won't. It's not as simple as it sounds. Which host should be chosen from the pool of possible destinations, and how do you guarantee the selected resource utilization? How homogeneous are the individual hosts in the pool? If a VM is using too many resources and needs to be migrated, what strategy should be used to make sure the migration converges? To me, this sounds like a job for a high-availability tool.
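On the convergence point: libvirt itself exposes migration flags that can help a busy guest converge, e.g. auto-converge (throttling the guest's vCPUs) or post-copy. A hedged sketch, again with made-up host and domain names:

import libvirt

src = libvirt.open("qemu+ssh://host1/system")   # hypothetical source host
dst = libvirt.open("qemu+ssh://host2/system")   # hypothetical destination host
dom = src.lookupByName("busy-guest")            # hypothetical domain name

# VIR_MIGRATE_AUTO_CONVERGE throttles the guest's vCPUs if pre-copy
# iterations cannot keep up with the dirty-page rate.
flags = (libvirt.VIR_MIGRATE_LIVE
         | libvirt.VIR_MIGRATE_PERSIST_DEST
         | libvirt.VIR_MIGRATE_AUTO_CONVERGE)
dom.migrate(dst, flags, None, None, 0)

Whether auto-converge (or post-copy, via VIR_MIGRATE_POSTCOPY plus a later switch-over) is acceptable depends on how much slowdown or post-copy risk the workload can tolerate.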
If you want something like what Michal suggested above, then you probably need to build your cloud *on top of* the various providers you have (VMware, Azure, EC2) and then run all the VMs in that "cloud". There will likely be some overhead and other issues as well. Maybe KubeVirt could solve that issue O:-)

[1] https://libguestfs.org/virt-v2v.1.html
Michal

1: https://libvirt.org/apps.html