Hello all,

I am seeing high latency in KVM with Linux 5.15 that I did not have with 4.19. Previously I was on 4.19.188-rt77 (with the PREEMPT_RT patch and CONFIG_PREEMPT_RT_FULL enabled), running KVM on an isolated CPU, and cyclictest reported a max latency of ~60 µs. After bumping the kernel to 5.15.13-rt26 (on both host and guest), I now see a huge max latency of more than 40 ms. By tweaking halt_poll_ns I managed to reduce the latency to roughly 4 ms. The host max latency outside KVM is unchanged. When I traced the system with LTTng to see what is happening, I saw that the CPU core running KVM periodically enters idle. I have tried three different Intel CPUs (Intel Core i3-4130, Intel Xeon E5-2680 and Intel Xeon X5660) and always get the same result.

Here is my setup:
- Intel CPU
- KVM running on a dedicated CPU with real-time priority
- Kernel parameters: isolcpus=2-3 nohz_full=2-3 rcu_nocbs=2-3 irqaffinity=0-1
- CONFIG_CPU_IDLE not set
- qemu-system-x86_64 version 4.2.0
- cyclictest command: cyclictest -l1000000 -m -Sp90 -i200 -h200

QEMU parameters:

/usr/bin/qemu-system-x86_64 -name guest=guest0,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-2-guest0/master-key.aes -blockdev {"driver":"file","filename":"/usr/share/qemu/edk2-x86_64-code.fd","node-name":"libvirt-pflash0-storage","auto-read-only":true,"discard":"unmap"} -blockdev {"node-name":"libvirt-pflash0-format","read-only":true,"driver":"raw","file":"libvirt-pflash0-storage"} -blockdev {"driver":"file","filename":"/var/lib/libvirt/qemu/nvram/guest0_VARS.fd","node-name":"libvirt-pflash1-storage","auto-read-only":true,"discard":"unmap"} -blockdev {"node-name":"libvirt-pflash1-format","read-only":false,"driver":"raw","file":"libvirt-pflash1-storage"} -machine pc-i440fx-4.1,accel=kvm,usb=off,dump-guest-core=off,pflash0=libvirt-pflash0-format,pflash1=libvirt-pflash1-format -cpu host,tsc-deadline=on,pmu=off -m 256 -overcommit mem-lock=off -smp 1,sockets=1,dies=1,cores=1,threads=1 -uuid 06ed47d1-6dc8-437b-a655-c578768dd0c2 -no-user-config -nodefaults -device sga -chardev socket,id=charmonitor,fd=35,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc,driftfix=slew -global kvm-pit.lost_tick_policy=delay -no-hpet -no-shutdown -global PIIX4_PM.disable_s3=1 -global PIIX4_PM.disable_s4=1 -boot menu=off,reboot-timeout=0,strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -blockdev {"driver":"file","filename":"/var/lib/libvirt/images/seapath-guest-efi-test-image-votp-vm.wic.qcow2","node-name":"libvirt-1-storage","auto-read-only":true,"discard":"unmap"} -blockdev {"node-name":"libvirt-1-format","read-only":false,"driver":"qcow2","file":"libvirt-1-storage","backing":null} -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x3,drive=libvirt-1-format,id=virtio-disk0,bootindex=1 -netdev tap,fd=37,id=hostnet0,vhost=on,vhostfd=38 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:34:56:4d,bus=pci.0,addr=0x5 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -vnc 127.0.0.1:0 -device VGA,id=video0,vgamem_mb=16,bus=pci.0,addr=0x4 -device ib700,id=watchdog0 -watchdog-action poweroff -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x2 -msg timestamp=on
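For completeness, I adjusted halt_poll_ns at runtime through the standard kvm module parameter, along these lines (50000 is the value from the last row of the results below):

  # host: raise the KVM halt-polling window to 50 µs
  echo 50000 > /sys/module/kvm/parameters/halt_poll_ns

Raising halt_poll_ns makes KVM busy-poll longer in kvm_vcpu_block() before the vCPU thread is scheduled out, which is presumably why it reduces, but does not eliminate, the latency.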
Results:

Host kernel     Guest kernel     halt_poll_ns   Max latency
4.19.188-rt77   4.19.188-rt77    default        ~60 µs
5.15.13-rt26    5.15.13-rt26     default        > 40 ms
5.15.13-rt26    4.19.188-rt77    default        > 40 ms
5.15.13-rt26    5.15.13-rt26     50000          > 4 ms

I would like to know what is introducing this latency. Is it related to the fact that the CPU running KVM periodically enters idle? And why does this behavior appear in 5.15 but not in 4.19?

Thanks
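For reference, I captured the idle transitions with an LTTng kernel trace roughly like this (the session name is arbitrary and the exact event list I used may have been broader):

  # on the host
  lttng create kvm-idle
  lttng enable-event --kernel sched_switch,power_cpu_idle,kvm_entry,kvm_exit
  lttng start
  # run cyclictest in the guest, then:
  lttng stop
  # filter the output for the isolated core running the vCPU
  lttng view | grep "cpu_id = 2"

Note that power_cpu_idle may not fire with CONFIG_CPU_IDLE unset; the sched_switch events to the swapper task are what show the isolated core going idle.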