On Wed, Sep 13, 2023, isaku.yamahata@xxxxxxxxx wrote:
> From: Isaku Yamahata <isaku.yamahata@xxxxxxxxx>
>
> When resolving a KVM page fault and the backing page is hwpoisoned, exit
> to userspace with a HWPOISON flag so that the userspace VMM, e.g. qemu,
> can handle it.
>
> - Add a new flag HWPOISON to KVM_EXIT_MEMORY_FAULT to indicate the page
>   is poisoned.
> - Make kvm_gmem_get_pfn() report the hwpoison state by returning
>   -EHWPOISON when the folio is hw-poisoned.
> - When the page is hw-poisoned on faulting in private gmem, return
>   KVM_EXIT_MEMORY_FAULT with the HWPOISON flag set.
>
> Signed-off-by: Isaku Yamahata <isaku.yamahata@xxxxxxxxx>
> ---

...

> diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
> index eb900344a054..48329cb44415 100644
> --- a/include/uapi/linux/kvm.h
> +++ b/include/uapi/linux/kvm.h
> @@ -527,7 +527,8 @@ struct kvm_run {
>  		} notify;
>  		/* KVM_EXIT_MEMORY_FAULT */
>  		struct {
> -#define KVM_MEMORY_EXIT_FLAG_PRIVATE	(1ULL << 3)
> +#define KVM_MEMORY_EXIT_FLAG_PRIVATE	BIT_ULL(3)
> +#define KVM_MEMORY_EXIT_FLAG_HWPOISON	BIT_ULL(4)

Rather than add a flag, I think we should double down on returning -1 + errno
when exiting with vcpu->run->exit_reason == KVM_EXIT_MEMORY_FAULT, as is being
proposed in Anish's series for accelerating UFFD-like behavior in KVM[*].

Then KVM can simply return -EFAULT or -EHWPOISON to communicate why KVM is
exiting at a higher level, and let the kvm_run structure provide the finer
details about the access itself.

E.g. kvm_faultin_pfn_private() can simply propagate the return value from
kvm_gmem_get_pfn() without having to identify *why* kvm_gmem_get_pfn() failed.

static int kvm_faultin_pfn_private(struct kvm_vcpu *vcpu,
				   struct kvm_page_fault *fault)
{
	int max_order, r;

	if (!kvm_slot_can_be_private(fault->slot)) {
		kvm_mmu_prepare_memory_fault_exit(vcpu, fault);
		return -EFAULT;
	}

	r = kvm_gmem_get_pfn(vcpu->kvm, fault->slot, fault->gfn, &fault->pfn,
			     &max_order);
	if (r) {
		kvm_mmu_prepare_memory_fault_exit(vcpu, fault);
		return r;
	}

	...
}

[*] https://lore.kernel.org/all/20230908222905.1321305-5-amoorthy@xxxxxxxxxx
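
For completeness, the userspace side under that scheme would look something
like the below.  Completely untested sketch: vmm_resolve_fault() and
vmm_inject_memory_error() are made-up VMM helpers, and it assumes headers
carrying the proposed uAPI (KVM_EXIT_MEMORY_FAULT and the memory_fault
layout), all of which may still change.

#include <errno.h>
#include <stdbool.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Hypothetical VMM helpers, names made up purely for illustration. */
int vmm_resolve_fault(__u64 gpa, __u64 size, bool is_private);
int vmm_inject_memory_error(__u64 gpa, __u64 size, bool is_private);

int vmm_vcpu_run(int vcpu_fd, struct kvm_run *run)
{
	int ret = ioctl(vcpu_fd, KVM_RUN, NULL);

	if (ret < 0 && run->exit_reason == KVM_EXIT_MEMORY_FAULT) {
		bool is_private = run->memory_fault.flags &
				  KVM_MEMORY_EXIT_FLAG_PRIVATE;

		/* -EHWPOISON: the page itself is bad, not just unmappable. */
		if (errno == EHWPOISON)
			return vmm_inject_memory_error(run->memory_fault.gpa,
						       run->memory_fault.size,
						       is_private);

		/* -EFAULT: e.g. attribute mismatch; fix up and re-enter. */
		if (errno == EFAULT)
			return vmm_resolve_fault(run->memory_fault.gpa,
						 run->memory_fault.size,
						 is_private);
	}
	return ret;
}

I.e. userspace keys off errno for the *class* of failure and off kvm_run for
the details of the access, so new failure modes don't need new flags.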