On Tue, Aug 16, 2022, Peter Xu wrote:
> Since at it, renaming kvm_handle_bad_page to kvm_handle_error_pfn assuming

Please put parentheses after function names, e.g. kvm_handle_bad_page().

> that'll match better with what it does, e.g. KVM_PFN_ERR_SIGPENDING is not
> accurately a bad page but just one kind of errors.

...

> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 3e1317325e1f..23dc46da2f18 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -3134,8 +3134,13 @@ static void kvm_send_hwpoison_signal(unsigned long address, struct task_struct *
>  	send_sig_mceerr(BUS_MCEERR_AR, (void __user *)address, PAGE_SHIFT, tsk);
>  }
>  
> -static int kvm_handle_bad_page(struct kvm_vcpu *vcpu, gfn_t gfn, kvm_pfn_t pfn)
> +static int kvm_handle_error_pfn(struct kvm_vcpu *vcpu, gfn_t gfn, kvm_pfn_t pfn)
>  {
> +	if (is_sigpending_pfn(pfn)) {
> +		kvm_handle_signal_exit(vcpu);
> +		return -EINTR;
> +	}

...

> @@ -2648,9 +2651,12 @@ kvm_pfn_t hva_to_pfn(unsigned long addr, bool atomic, bool *async,
>  	if (atomic)
>  		return KVM_PFN_ERR_FAULT;
>  
> -	npages = hva_to_pfn_slow(addr, async, write_fault, writable, &pfn);
> +	npages = hva_to_pfn_slow(addr, async, write_fault, interruptible,
> +				 writable, &pfn);
>  	if (npages == 1)
>  		return pfn;
> +	if (npages == -EINTR)
> +		return KVM_PFN_ERR_SIGPENDING;

This patch should be split into 3 parts:

  1. Add KVM_PFN_ERR_SIGPENDING and the above code
  2. Add the interruptible flag
  3. Add the handling in x86 and rename kvm_handle_bad_page()

with #3 merged into patch 3. That way, if there's oddball arch code that
reacts poorly to KVM_PFN_ERR_SIGPENDING, those errors will bisect to #1. If
there's a typo in the plumbing, that bisects to #2. And if something goes
sideways in x86, those bugs bisect to #3 (patch 3), and it's easy to revert
just the x86 changes (though I can't imagine that's likely).