On Tue, 2021-08-24 at 15:55 +0800, Lai Jiangshan wrote:
> From: Lai Jiangshan <laijs@xxxxxxxxxxxxxxxxx>
>
> We'd better unsync the pagetable only when there was a real write
> fault on a level-1 pagetable.
>
> Signed-off-by: Lai Jiangshan <laijs@xxxxxxxxxxxxxxxxx>
> ---
>  arch/x86/kvm/mmu/mmu.c          | 6 +++++-
>  arch/x86/kvm/mmu/mmu_internal.h | 3 ++-
>  arch/x86/kvm/mmu/spte.c         | 2 +-
>  3 files changed, 8 insertions(+), 3 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index a165eb8713bc..e5932af6f11c 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -2600,7 +2600,8 @@ static void kvm_unsync_page(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
>   * were marked unsync (or if there is no shadow page), -EPERM if the SPTE must
>   * be write-protected.
>   */
> -int mmu_try_to_unsync_pages(struct kvm_vcpu *vcpu, gfn_t gfn, bool can_unsync)
> +int mmu_try_to_unsync_pages(struct kvm_vcpu *vcpu, gfn_t gfn, bool can_unsync,
> +			    bool speculative)
>  {
>  	struct kvm_mmu_page *sp;
>  	bool locked = false;
> @@ -2626,6 +2627,9 @@ int mmu_try_to_unsync_pages(struct kvm_vcpu *vcpu, gfn_t gfn, bool can_unsync)
>  		if (sp->unsync)
>  			continue;
>
> +		if (speculative)
> +			return -EEXIST;

Wouldn't it be better to ensure that callers set can_unsync = false when
speculating?

Also, if I understand correctly, this is not fixing a bug but an
optimization?

Best regards,
	Maxim Levitsky

> +
>  		/*
>  		 * TDP MMU page faults require an additional spinlock as they
>  		 * run with mmu_lock held for read, not write, and the unsync
> diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
> index 658d8d228d43..f5d8be787993 100644
> --- a/arch/x86/kvm/mmu/mmu_internal.h
> +++ b/arch/x86/kvm/mmu/mmu_internal.h
> @@ -116,7 +116,8 @@ static inline bool kvm_vcpu_ad_need_write_protect(struct kvm_vcpu *vcpu)
>  		kvm_x86_ops.cpu_dirty_log_size;
>  }
>
> -int mmu_try_to_unsync_pages(struct kvm_vcpu *vcpu, gfn_t gfn, bool can_unsync);
> +int mmu_try_to_unsync_pages(struct kvm_vcpu *vcpu, gfn_t gfn, bool can_unsync,
> +			    bool speculative);
>
>  void kvm_mmu_gfn_disallow_lpage(const struct kvm_memory_slot *slot, gfn_t gfn);
>  void kvm_mmu_gfn_allow_lpage(const struct kvm_memory_slot *slot, gfn_t gfn);
> diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
> index 3e97cdb13eb7..b68a580f3510 100644
> --- a/arch/x86/kvm/mmu/spte.c
> +++ b/arch/x86/kvm/mmu/spte.c
> @@ -159,7 +159,7 @@ int make_spte(struct kvm_vcpu *vcpu, unsigned int pte_access, int level,
>  	 * e.g. it's write-tracked (upper-level SPs) or has one or more
>  	 * shadow pages and unsync'ing pages is not allowed.
>  	 */
> -	if (mmu_try_to_unsync_pages(vcpu, gfn, can_unsync)) {
> +	if (mmu_try_to_unsync_pages(vcpu, gfn, can_unsync, speculative)) {
>  		pgprintk("%s: found shadow page for %llx, marking ro\n",
>  			 __func__, gfn);
>  		ret |= SET_SPTE_WRITE_PROTECTED_PT;
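
For reference, the suggested alternative would amount to roughly the
following at the make_spte() call site. This is a minimal, untested
sketch; note that, if I read mmu_try_to_unsync_pages() right, it is not
strictly equivalent to the patch, as the comment below explains:

	/*
	 * Sketch: fold the speculation check into can_unsync at the call
	 * site instead of threading a new parameter through.  The
	 * semantics differ slightly from the patch: !can_unsync makes
	 * mmu_try_to_unsync_pages() return -EPERM before the sp->unsync
	 * check, so this also write-protects a gfn whose shadow pages are
	 * all already unsync, whereas the speculative flag would still
	 * allow a writable SPTE in that case.
	 */
	if (mmu_try_to_unsync_pages(vcpu, gfn, can_unsync && !speculative)) {
		pgprintk("%s: found shadow page for %llx, marking ro\n",
			 __func__, gfn);
		ret |= SET_SPTE_WRITE_PROTECTED_PT;
	}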