RE: [PATCH v3 17/17] KVM: x86/xen: Add event channel interrupt vector upcall

> -----Original Message-----
> From: Joao Martins <joao.m.martins@xxxxxxxxxx>
> Sent: 14 December 2020 13:20
> To: David Woodhouse <dwmw2@xxxxxxxxxxxxx>
> Cc: Paolo Bonzini <pbonzini@xxxxxxxxxx>; Ankur Arora <ankur.a.arora@xxxxxxxxxx>; Boris Ostrovsky
> <boris.ostrovsky@xxxxxxxxxx>; Sean Christopherson <seanjc@xxxxxxxxxx>; Graf (AWS), Alexander
> <graf@xxxxxxxxx>; Aslanidis (AWS), Ioannis <iaslan@xxxxxxxxx>; Durrant, Paul <pdurrant@xxxxxxxxxxxx>;
> Agache, Alexandru <aagch@xxxxxxxxxx>; Florescu, Andreea <fandree@xxxxxxxxxx>; kvm@xxxxxxxxxxxxxxx
> Subject: RE: [EXTERNAL] [PATCH v3 17/17] KVM: x86/xen: Add event channel interrupt vector upcall
> 
> On 12/14/20 8:39 AM, David Woodhouse wrote:
> > diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> > index df44d9e50adc..e627139cf8cd 100644
> > --- a/arch/x86/kvm/x86.c
> > +++ b/arch/x86/kvm/x86.c
> > @@ -8896,7 +8896,8 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
> >                       kvm_x86_ops.msr_filter_changed(vcpu);
> >       }
> >
> > -     if (kvm_check_request(KVM_REQ_EVENT, vcpu) || req_int_win) {
> > +     if (kvm_check_request(KVM_REQ_EVENT, vcpu) || req_int_win ||
> > +         kvm_xen_has_interrupt(vcpu)) {
> >               ++vcpu->stat.req_event;
> >               kvm_apic_accept_events(vcpu);
> >               if (vcpu->arch.mp_state == KVM_MP_STATE_INIT_RECEIVED) {
> > diff --git a/arch/x86/kvm/xen.c b/arch/x86/kvm/xen.c
> > index 17cbb4462b7e..4bc9da9fcfb8 100644
> > --- a/arch/x86/kvm/xen.c
> > +++ b/arch/x86/kvm/xen.c
> > @@ -176,6 +176,45 @@ void kvm_xen_setup_runstate_page(struct kvm_vcpu *v)
> >       kvm_xen_update_runstate(v, RUNSTATE_running, steal_time);
> >  }
> >
> > +int kvm_xen_has_interrupt(struct kvm_vcpu *v)
> > +{
> > +     u8 rc = 0;
> > +
> > +     /*
> > +      * If the global upcall vector (HVMIRQ_callback_vector) is set and
> > +      * the vCPU's evtchn_upcall_pending flag is set, the IRQ is pending.
> > +      */
> > +     if (v->arch.xen.vcpu_info_set && v->kvm->arch.xen.upcall_vector) {
> > +             struct gfn_to_hva_cache *ghc = &v->arch.xen.vcpu_info_cache;
> > +             struct kvm_memslots *slots = kvm_memslots(v->kvm);
> > +             unsigned int offset = offsetof(struct vcpu_info, evtchn_upcall_pending);
> > +
> 
> To have less nesting, wouldn't it be better to invert the logic?
> 
> say:
> 
>         u8 rc = 0;
>         struct gfn_to_hva_cache *ghc;
>         struct kvm_memslots *slots;
>         unsigned int offset;
> 
> 
>         if (!v->arch.xen.vcpu_info_set || !v->kvm->arch.xen.upcall_vector)
>                 return 0;
> 
>         BUILD_BUG_ON(...)
> 
>         ghc = &v->arch.xen.vcpu_info_cache;
>         slots = kvm_memslots(v->kvm);
>         offset = offsetof(struct vcpu_info, evtchn_upcall_pending);
> 
> But I think there's a flaw here: it doesn't handle the case where you don't have a
> vcpu_info registered and only have shared_info. The vcpu_info is then placed elsewhere,
> i.e. at an offset inside shared_info -- which is *I think* the case for PVHVM Windows guests.

Only been true for Windows for about a week ;-)

  Paul

> 
> Perhaps introduce a helper such as xen_vcpu_info() which returns the hva (picking
> the right cache), similar to the RFC patch. Albeit that was with page pinning, but
> borrowing an older version I had with the gfn_to_hva_cache incantation, it would probably look like:
> 
> 
>         if (v->arch.xen.vcpu_info_set) {
>                 ghc = &v->arch.xen.vcpu_info_cache;
>         } else {
>                 ghc = &v->kvm->arch.xen.shinfo_cache;
>                 offset += offsetof(struct shared_info, vcpu_info);
>                 offset += v->vcpu_id * sizeof(struct vcpu_info);
>         }
> 
>         if (likely(slots->generation == ghc->generation &&
>                    !kvm_is_error_hva(ghc->hva) && ghc->memslot)) {
>                 /* Fast path */
>                 __get_user(rc, (u8 __user *)ghc->hva + offset);
>         } else {
>                 /* Slow path */
>                 kvm_read_guest_offset_cached(v->kvm, ghc, &rc, offset,
>                                              sizeof(rc));
>         }
> 
>  ?
> 
>         Joao
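
For reference, folding the early return and the shared_info fallback together would give
roughly the sketch below. This is only illustrative: the shinfo_cache field name, the use
of v->vcpu_id as the vcpu_info index, and the final return value are assumptions based on
the rest of the series rather than anything quoted above.

int kvm_xen_has_interrupt(struct kvm_vcpu *v)
{
        struct gfn_to_hva_cache *ghc;
        struct kvm_memslots *slots;
        unsigned int offset;
        u8 rc = 0;

        /* Nothing to deliver if no callback vector has been set up. */
        if (!v->kvm->arch.xen.upcall_vector)
                return 0;

        offset = offsetof(struct vcpu_info, evtchn_upcall_pending);

        if (v->arch.xen.vcpu_info_set) {
                /* Explicitly registered vcpu_info. */
                ghc = &v->arch.xen.vcpu_info_cache;
        } else {
                /*
                 * Fall back to the vcpu_info embedded in shared_info
                 * (assumes shared_info has been registered).
                 */
                ghc = &v->kvm->arch.xen.shinfo_cache;
                offset += offsetof(struct shared_info, vcpu_info);
                offset += v->vcpu_id * sizeof(struct vcpu_info);
        }

        slots = kvm_memslots(v->kvm);

        if (likely(slots->generation == ghc->generation &&
                   !kvm_is_error_hva(ghc->hva) && ghc->memslot)) {
                /* Fast path: the cached hva is still valid. */
                __get_user(rc, (u8 __user *)ghc->hva + offset);
        } else {
                /* Slow path: revalidate the cache and read through it. */
                kvm_read_guest_offset_cached(v->kvm, ghc, &rc, offset,
                                             sizeof(rc));
        }

        return rc;
}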



