The memory allocator is fully preemptible and therefore cannot be
invoked from truly atomic contexts; see
Documentation/locking/locktypes.rst (line 470).

Drop the lock with raw_spin_unlock() before the memory allocation and
re-acquire it with raw_spin_lock() afterwards, and switch the
allocation from GFP_ATOMIC to GFP_KERNEL since it is no longer done
under the raw spinlock.

Signed-off-by: Yajun Deng <yajun.deng@xxxxxxxxx>
---
 arch/x86/kernel/kvm.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index d0bb2b3fb305..8f8ec9bbd847 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -205,7 +205,9 @@ void kvm_async_pf_task_wake(u32 token)
 		 * async PF was not yet handled.
 		 * Add dummy entry for the token.
 		 */
-		n = kzalloc(sizeof(*n), GFP_ATOMIC);
+		raw_spin_unlock(&b->lock);
+		n = kzalloc(sizeof(*n), GFP_KERNEL);
+		raw_spin_lock(&b->lock);
 		if (!n) {
 			/*
 			 * Allocation failed! Busy wait while other cpu
--
2.25.1
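
Note for readers (illustrative only, not part of the patch): below is a
minimal sketch of the generic unlock -> allocate -> relock pattern the
change relies on. All names here (demo_bucket, demo_entry,
demo_add_entry) are hypothetical, not taken from kvm.c. The point of
the sketch is that any state guarded by the lock must be re-checked
after raw_spin_lock() is taken again, because another CPU may have
changed it while the lock was dropped for the sleeping allocation.

#include <linux/list.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

struct demo_bucket {
	raw_spinlock_t lock;
	struct list_head head;
};

struct demo_entry {
	struct list_head link;
	u32 token;
};

static void demo_add_entry(struct demo_bucket *b, u32 token)
{
	struct demo_entry *n, *pos;

	raw_spin_lock(&b->lock);
	list_for_each_entry(pos, &b->head, link) {
		if (pos->token == token) {
			/* Already present, nothing to allocate. */
			raw_spin_unlock(&b->lock);
			return;
		}
	}
	/*
	 * Drop the raw spinlock before calling the allocator:
	 * GFP_KERNEL allocations may sleep, which is not allowed
	 * while a raw spinlock is held.
	 */
	raw_spin_unlock(&b->lock);
	n = kzalloc(sizeof(*n), GFP_KERNEL);
	if (!n)
		return;
	raw_spin_lock(&b->lock);
	/*
	 * Re-check under the lock: another CPU may have inserted an
	 * entry for this token while the lock was dropped.
	 */
	list_for_each_entry(pos, &b->head, link) {
		if (pos->token == token) {
			raw_spin_unlock(&b->lock);
			kfree(n);
			return;
		}
	}
	n->token = token;
	list_add(&n->link, &b->head);
	raw_spin_unlock(&b->lock);
}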