Thank you Christophe for your idea, it led me in the right direction. I found that the root cause is actually the default value of halt_poll_ns (200000 ns = 200 us).

"The KVM halt polling system provides a feature within KVM whereby the latency of a guest can, under some circumstances, be reduced by polling in the host for some time period after the guest has elected to no longer run by cedeing."

When the cyclictest interval is larger than halt_poll_ns, polling does not help (it is never interrupted by a wakeup) and the grow/shrink algorithm drives the polling interval down to 0:

"In the event that the total block time was greater than the global max polling interval then the host will never poll for long enough (limited by the global max) to wakeup during the polling interval so it may as well be shrunk in order to avoid pointless polling."

But when the cyclictest interval starts becoming smaller than halt_poll_ns, a wakeup source is received within the polling window:

"During polling if a wakeup source is received within the halt polling interval, the interval is left unchanged."

So polling continues with the same value, again and again, which puts us in this known situation:

"Care should be taken when setting the halt_poll_ns module parameter as a large value has the potential to drive the cpu usage to 100% on a machine which would be almost entirely idle otherwise. This is because even if a guest has wakeups during which very little work is done and which are quite far apart, if the period is shorter than the global max polling interval (halt_poll_ns) then the host will always poll for the entire block time and thus cpu utilisation will go to 100%."

It just took me a while to realize that halt_poll_ns = 200000 ns = 200 us = exactly my problematic cyclictest interval threshold. If I set halt_poll_ns to 100000 (and restart the VM, that's important), then the 200 us cyclictest interval works fine.
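
For reference, here is a minimal sketch of how the value can be changed at runtime on the host, assuming root access and the usual sysfs location of the kvm module parameter (verify the path on your system, it can differ depending on which kvm module your architecture uses). The 100000 value is the one that worked for me above; remember to restart the VM afterwards. To make the change persistent across reboots, a modprobe.d option line (e.g. "options kvm halt_poll_ns=100000") should also work.

    #!/usr/bin/env python3
    # Minimal sketch: lower the kvm halt_poll_ns module parameter at runtime.
    # Assumes root privileges and the standard sysfs path for kvm module
    # parameters; check that the path exists on your host before relying on it.
    from pathlib import Path

    PARAM = Path("/sys/module/kvm/parameters/halt_poll_ns")
    NEW_VALUE_NS = 100000  # 100 us, below my 200 us cyclictest interval

    print("current halt_poll_ns:", PARAM.read_text().strip(), "ns")
    PARAM.write_text(str(NEW_VALUE_NS))
    print("new halt_poll_ns:", PARAM.read_text().strip(), "ns")
    # Note: in my tests the running VM had to be restarted for the new value
    # to take effect on its vCPU threads.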