Re: [PATCH v2 3/3] kvm/x86: Allow to respond to generic signals during slow page faults

On Wed, Jul 20, 2022, Peter Xu wrote:
> All the facilities should be ready for this; what we need to do is add a
> new "interruptible" flag showing that we're willing to be interrupted by
> common signals during the __gfn_to_pfn_memslot() request, and wire it up
> with the FOLL_INTERRUPTIBLE flag that we've just introduced.
> 
> Note that only the x86 slow page fault routine will set this to true.  The
> new flag defaults to false on non-x86 architectures and on other gup paths
> even for x86.  It could be used elsewhere too, but that is not yet covered.
> 
> When we see that the PFN fetch was interrupted, exit early to userspace
> with a KVM_EXIT_INTR exit reason.
> 
> Signed-off-by: Peter Xu <peterx@xxxxxxxxxx>
> ---
>  arch/arm64/kvm/mmu.c                   |  2 +-
>  arch/powerpc/kvm/book3s_64_mmu_hv.c    |  2 +-
>  arch/powerpc/kvm/book3s_64_mmu_radix.c |  2 +-
>  arch/x86/kvm/mmu/mmu.c                 | 16 ++++++++++++--
>  include/linux/kvm_host.h               |  4 ++--
>  virt/kvm/kvm_main.c                    | 30 ++++++++++++++++----------
>  virt/kvm/kvm_mm.h                      |  4 ++--
>  virt/kvm/pfncache.c                    |  2 +-
>  8 files changed, 41 insertions(+), 21 deletions(-)

I don't usually like adding code without a user, but in this case I think I'd
prefer to add the @interruptible param and then activate x86's kvm_faultin_pfn()
in a separate patch.  As-is, it's rather difficult to tease out the functional
x86 change, and splitting would also allow other architectures to use the
interruptible support without having to depend on the x86 change.

And maybe squash the addition of @interruptible into the previous patch?  I.e.
add all of the infrastructure for KVM_PFN_ERR_SIGPENDING in patch 2, then use it
in x86 in patch 3.
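
To be concrete, patch 2's end state would be something like this (rough sketch
going off the diffstat above, exact signatures TBD):

	/*
	 * Plumb @interruptible down to the slow gup path; no functional
	 * change until a caller actually passes true.
	 */
	kvm_pfn_t __gfn_to_pfn_memslot(const struct kvm_memory_slot *slot, gfn_t gfn,
				       bool atomic, bool interruptible, bool *async,
				       bool write_fault, bool *writable, hva_t *hva);

	static int hva_to_pfn_slow(unsigned long addr, bool *async, bool write_fault,
				   bool interruptible, bool *writable, kvm_pfn_t *pfn)
	{
		unsigned int flags = FOLL_HWPOISON;

		...
		if (interruptible)
			flags |= FOLL_INTERRUPTIBLE;
		...
	}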

> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 17252f39bd7c..aeafe0e9cfbf 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -3012,6 +3012,13 @@ static int kvm_handle_bad_page(struct kvm_vcpu *vcpu, gfn_t gfn, kvm_pfn_t pfn)
>  static int handle_abnormal_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault,
>  			       unsigned int access)
>  {
> +	/* NOTE: not all error pfn is fatal; handle sigpending pfn first */
> +	if (unlikely(is_sigpending_pfn(fault->pfn))) {

Move this into kvm_handle_bad_page(), then there's no need for a comment to call
out that this needs to come before the is_error_pfn() check.  This _is_ a "bad"
PFN; it just so happens that userspace might be able to resolve it.

> +		vcpu->run->exit_reason = KVM_EXIT_INTR;
> +		++vcpu->stat.signal_exits;
> +		return -EINTR;

For better or worse, kvm_handle_signal_exit() exists and can be used here.  I
don't love that KVM details bleed into xfer_to_guest_mode_work(), but that's a
future problem.
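
For reference, the helper today is just (quoting kvm_host.h from memory, so
double-check me):

	static inline void kvm_handle_signal_exit(struct kvm_vcpu *vcpu)
	{
		vcpu->run->exit_reason = KVM_EXIT_INTR;
		vcpu->stat.signal_exits++;
	}

i.e. exactly the two statements open-coded above, minus the -EINTR return.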

I do think that the "return -EINTR" should be moved into kvm_handle_signal_exit(),
partly for code reuse and partly because returning -EINTR is very much KVM ABI.
Oof, but there are a _lot_ of paths that can use kvm_handle_signal_exit(), and
some of them don't select KVM_XFER_TO_GUEST_WORK, i.e. kvm_handle_signal_exit()
should be defined unconditionally.  I'll work on a series to handle that
separately; no reason to take a dependency on that cleanup.

So for now,

static int kvm_handle_bad_page(struct kvm_vcpu *vcpu, gfn_t gfn, kvm_pfn_t pfn)
{
	if (pfn == KVM_PFN_ERR_SIGPENDING) {
		kvm_handle_signal_exit(vcpu);
		return -EINTR;
	}

	...
}
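
And the KVM_PFN_ERR_SIGPENDING side of the infrastructure would presumably boil
down to a small check in hva_to_pfn(), something like:

	npages = hva_to_pfn_slow(addr, async, write_fault, interruptible,
				 writable, &pfn);
	if (npages == 1)
		return pfn;
	if (npages == -EINTR)
		return KVM_PFN_ERR_SIGPENDING;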



