[ This is not the correct patch to blame, but there is something going
  on here which I don't understand, so this email is more about me
  learning rather than reporting bugs. - dan ]

Hello Ben Gardon,

The patch 531810caa9f4: "KVM: x86/mmu: Use an rwlock for the x86 MMU"
from Feb 2, 2021, leads to the following static checker warning:

	arch/x86/kvm/mmu/mmu.c:5769 kvm_mmu_zap_all()
	warn: sleeping in atomic context

arch/x86/kvm/mmu/mmu.c
  5756	void kvm_mmu_zap_all(struct kvm *kvm)
  5757	{
  5758		struct kvm_mmu_page *sp, *node;
  5759		LIST_HEAD(invalid_list);
  5760		int ign;
  5761	
  5762		write_lock(&kvm->mmu_lock);
        	^^^^^^^^^^^^^^^^^^^^^^^^^^
This line bumps the preempt count.

  5763	restart:
  5764		list_for_each_entry_safe(sp, node, &kvm->arch.active_mmu_pages, link) {
  5765			if (WARN_ON(sp->role.invalid))
  5766				continue;
  5767			if (__kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list, &ign))
  5768				goto restart;
--> 5769		if (cond_resched_rwlock_write(&kvm->mmu_lock))
        		^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This line triggers a sleeping in atomic warning.  What's going on here
that I'm not understanding?

  5770			goto restart;
  5771		}
  5772	
  5773		kvm_mmu_commit_zap_page(kvm, &invalid_list);
  5774	
  5775		if (is_tdp_mmu_enabled(kvm))
  5776			kvm_tdp_mmu_zap_all(kvm);
  5777	
  5778		write_unlock(&kvm->mmu_lock);
  5779	}

regards,
dan carpenter
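
P.S. My current guess, for anyone following along: cond_resched_rwlock_write()
drops the lock before it sleeps and reacquires it afterwards, which would make
the warning a false positive that the checker can't see through.  Below is a
simplified sketch of that pattern; the sketch_ name and the details are my own
reading of kernel/sched/core.c, not the real implementation.

	#include <linux/sched.h>
	#include <linux/spinlock.h>

	/*
	 * Sketch only: if a reschedule is due or someone is spinning on
	 * the lock, drop the write lock, sleep, and take it again.
	 * Returning 1 tells the caller the lock was dropped, so it must
	 * revalidate its state (hence the "goto restart" in
	 * kvm_mmu_zap_all() above).
	 */
	static int sketch_cond_resched_rwlock_write(rwlock_t *lock)
	{
		if (!need_resched() && !rwlock_needbreak(lock))
			return 0;		/* lock never released */

		write_unlock(lock);		/* preempt count drops here */
		cond_resched();			/* now it is legal to sleep */
		write_lock(lock);		/* reacquire before returning */
		return 1;			/* caller must restart */
	}

If that reading is right, the sleep only ever happens after the preempt count
has been decremented by write_unlock(), so the code is fine and Smatch is the
one that is confused.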