I installed qemu-kvm-ev but the behavior is the same.

Running perf record -a -g on the bare-metal host shows that most of the
CPU time is spent in _raw_spin_lock:

  Children      Self  Command   Shared Object      Symbol
-   93.62%    93.62%  qemu-kvm  [kernel.kallsyms]  [k] _raw_spin_lock
     - _raw_spin_lock
        + 45.30% kvm_mmu_sync_roots
        + 28.49% kvm_mmu_load
        + 25.00% mmu_free_roots
        +  1.12% tdp_page_fault

On Wed, Aug 17, 2016 at 7:35 PM, Laurentiu Soica <laurentiu@xxxxxxxx> wrote:
> Hello,
>
> I have an OpenStack setup with KVM as the hypervisor in a virtual environment.
>
> So:
> - a bare-metal host with 2 physical CPUs and 128 GB RAM
> - the compute node, which is a VM under KVM on the bare-metal host,
>   with 100 GB RAM and 36 vCPUs
> - 15 VMs running inside the compute node, under KVM as well.
>
> The bare-metal host and the compute node both run CentOS 7, and on the
> bare-metal host I've enabled the nested KVM feature.
> Kernel is 3.10.0-327.22.2.el7.x86_64
> qemu-kvm is 1.5.3-105.el7_2.7
>
> The issue is that after a few days of running, the compute node's
> qemu-kvm process (on the bare-metal host) goes into full CPU usage
> (3600%) and the VMs under it are no longer accessible.
> Swap is disabled on the compute node, and there is no swap activity on
> the bare-metal host either (around 100 MB used).
>
> When the compute node goes into full CPU usage, the bare-metal host
> still has around 60 GB of RAM free and the compute node around 70 GB
> of RAM free.
>
> If something rings a bell or you have some troubleshooting tips,
> please let me know.
>
> Laurentiu
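
For reference, a minimal sketch of how the profile above can be captured
and inspected; the 30-second sampling window and the --stdio view are my
own choices, not taken from the original report:

    # Sample all CPUs system-wide (-a) with call graphs (-g) for 30 seconds;
    # results go to perf.data (the perf default output file).
    perf record -a -g -- sleep 30

    # Browse the recorded samples; look for _raw_spin_lock and its callers
    # such as kvm_mmu_sync_roots, kvm_mmu_load and mmu_free_roots.
    perf report --stdio
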