On Thu, 2023-09-14 at 08:49 +0000, Paul Durrant wrote:
> --- a/arch/x86/kvm/xen.c
> +++ b/arch/x86/kvm/xen.c
> @@ -430,14 +430,13 @@ static void kvm_xen_update_runstate_guest(struct kvm_vcpu *v, bool atomic)
>  		smp_wmb();
>  	}
>  
> -	if (user_len2)
> +	if (user_len2) {
> +		kvm_gpc_mark_dirty(gpc2);
>  		read_unlock(&gpc2->lock);
> +	}
>  
> +	kvm_gpc_mark_dirty(gpc1);
>  	read_unlock_irqrestore(&gpc1->lock, flags);
> -
> -	mark_page_dirty_in_slot(v->kvm, gpc1->memslot, gpc1->gpa >> PAGE_SHIFT);
> -	if (user_len2)
> -		mark_page_dirty_in_slot(v->kvm, gpc2->memslot, gpc2->gpa >> PAGE_SHIFT);
>  }
>  
>  void kvm_xen_update_runstate(struct kvm_vcpu *v, int state)

ISTR there was a reason why the mark_page_dirty_in_slot() calls were made *after* unlocking. Although now that I say it, that seems wrong... Is that because the rwlock is only protecting the uHVA→kHVA mapping, while the memslot/gpa remain valid even after unlock, because those are protected by SRCU?
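(For reference, I'm assuming kvm_gpc_mark_dirty() earlier in your series amounts to something like the sketch below — inferred purely from the call sites in this hunk, so the actual definition may differ:

	/* Hedged sketch, not quoted from the patch: mark the page backing
	 * a gfn_to_pfn_cache dirty while the cache lock is still held. */
	static void kvm_gpc_mark_dirty(struct gfn_to_pfn_cache *gpc)
	{
		/* Caller must hold gpc->lock. */
		lockdep_assert_held(&gpc->lock);

		mark_page_dirty_in_slot(gpc->kvm, gpc->memslot,
					gpc->gpa >> PAGE_SHIFT);
	}

If so, the net effect of this hunk is to move the dirty marking inside the locked region rather than after the unlock, which is what prompts the question below.)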