Hey everyone,

we are developing the KVM backend for VirtualBox [0] and wanted to reach out regarding some weird behavior.

We are using `timer_create` to deliver timer events to vCPU threads as signals. We mask the signal using pthread_sigmask in the host vCPU thread and unmask it for guest execution using KVM_SET_SIGNAL_MASK. This method of handling timers works well and gives us very low latency compared to using a separate thread that handles timers. As far as we can tell, neither Qemu nor other VMMs use such a setup.

We see two issues:

When we enable nested virtualization, we see what looks like corruption in the nested guest. The guest trips over exceptions that shouldn't be there. We are currently debugging this to find out the details, but the setup is pretty painful and it will take a bit. If we disable the timer signals, this issue goes away (at the cost of broken VBox timers, obviously...). This is weird and has left us wondering whether there might be something broken with signals in this scenario, especially since none of the other VMMs uses this method.

The other issue is a somewhat sad interaction with split-lock detection, which I've blogged about some time ago [1]. Long story short: when you program timers <10ms into the future, you run the risk of making no progress anymore once the guest triggers the split-lock punishment [2]. See the blog post for details. I was wondering whether there is a better solution here than disabling split-lock detection, or whether our approach is fundamentally broken.

Looking forward to your thoughts. :)

Thanks!
Julian

[0] https://github.com/cyberus-technology/virtualbox-kvm
[1] https://x86.lol/generic/2023/11/07/split-lock.html
[2] https://elixir.bootlin.com/linux/v6.9-rc1/source/arch/x86/kernel/cpu/intel.c#L1137
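P.S. For concreteness, here is a stripped-down sketch of the setup described above. This is not our actual code: TIMER_SIG, vcpu_fd and the helper names are placeholders, and error handling is omitted.

/* Simplified sketch: per-vCPU timer whose signal is blocked in userspace
 * and only unblocked (atomically) while the vCPU sits in KVM_RUN. */
#define _GNU_SOURCE
#include <linux/kvm.h>
#include <signal.h>
#include <stdlib.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <time.h>
#include <unistd.h>

#define TIMER_SIG SIGRTMIN          /* placeholder: any otherwise unused signal */

#ifndef sigev_notify_thread_id      /* only newer glibc exposes this name */
#define sigev_notify_thread_id _sigev_un._tid
#endif

/* Create a timer that delivers TIMER_SIG to the calling (vCPU) thread only. */
static timer_t create_vcpu_timer(void)
{
    struct sigevent sev;
    timer_t timer;

    memset(&sev, 0, sizeof(sev));
    sev.sigev_notify = SIGEV_THREAD_ID;
    sev.sigev_signo  = TIMER_SIG;
    sev.sigev_notify_thread_id = syscall(SYS_gettid);

    timer_create(CLOCK_MONOTONIC, &sev, &timer);
    return timer;
}

/* Block TIMER_SIG for normal userspace execution, but tell KVM to unblock it
 * while the guest runs, so a pending expiry kicks KVM_RUN out with EINTR. */
static void setup_vcpu_signal_mask(int vcpu_fd)
{
    sigset_t block_set, run_set;

    sigemptyset(&block_set);
    sigaddset(&block_set, TIMER_SIG);
    pthread_sigmask(SIG_BLOCK, &block_set, NULL);

    /* Mask to be used *inside* KVM_RUN: current mask with TIMER_SIG unblocked. */
    pthread_sigmask(SIG_SETMASK, NULL, &run_set);
    sigdelset(&run_set, TIMER_SIG);

    struct kvm_signal_mask *km = calloc(1, sizeof(*km) + sizeof(run_set));
    km->len = 8;                    /* size of the kernel's sigset_t */
    memcpy(km->sigset, &run_set, km->len);
    ioctl(vcpu_fd, KVM_SET_SIGNAL_MASK, km);
    free(km);
}

timer_settime() then arms the timer with the next due deadline; when it fires during guest execution, KVM_RUN returns with EINTR and the vCPU thread services the timer directly, which is where the low latency comes from.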