From: Sean Christopherson <seanjc@xxxxxxxxxx>

Add flags to "struct kvm_gfn_range" to let notifier events target only
shared and only private mappings, and wire up the existing mmu_notifier
events to be shared-only (private memory is never associated with a
userspace virtual address, i.e. can't be reached via mmu_notifiers).

Add two flags so that KVM can handle the three possibilities (shared,
private, and shared+private) without needing something like a tri-state
enum.

Link: https://lore.kernel.org/all/ZJX0hk+KpQP0KUyB@xxxxxxxxxx
Signed-off-by: Sean Christopherson <seanjc@xxxxxxxxxx>
Message-Id: <20231027182217.3615211-13-seanjc@xxxxxxxxxx>
Signed-off-by: Paolo Bonzini <pbonzini@xxxxxxxxxx>
---
 include/linux/kvm_host.h |  2 ++
 virt/kvm/kvm_main.c      | 17 +++++++++++++++++
 2 files changed, 19 insertions(+)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 3ebc6912c54a..4d5d139b0bde 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -264,6 +264,8 @@ struct kvm_gfn_range {
 	gfn_t start;
 	gfn_t end;
 	union kvm_mmu_notifier_arg arg;
+	bool only_private;
+	bool only_shared;
 	bool may_block;
 };
 bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range);
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 8758cb799e18..9170a61ea99f 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -635,6 +635,13 @@ static __always_inline kvm_mn_ret_t __kvm_handle_hva_range(struct kvm *kvm,
 		 * the second or later invocation of the handler).
 		 */
 		gfn_range.arg = range->arg;
+
+		/*
+		 * HVA-based notifications provide a userspace address,
+		 * and as such are only relevant for shared mappings.
+		 */
+		gfn_range.only_private = false;
+		gfn_range.only_shared = true;
 		gfn_range.may_block = range->may_block;
 
 		/*
@@ -2493,6 +2500,16 @@ static __always_inline void kvm_handle_gfn_range(struct kvm *kvm,
 	gfn_range.arg = range->arg;
 	gfn_range.may_block = range->may_block;
 
+	/*
+	 * If/when KVM supports more attributes beyond private vs. shared, this
+	 * _could_ set only_{private,shared} appropriately if the entire target
+	 * range already has the desired private vs. shared state (it's unclear
+	 * if that is a net win). For now, KVM reaches this point if and only
+	 * if the private flag is being toggled, i.e. all mappings are in play.
+	 */
+	gfn_range.only_private = false;
+	gfn_range.only_shared = false;
+
 	for (i = 0; i < kvm_arch_nr_memslot_as_ids(kvm); i++) {
 		slots = __kvm_memslots(kvm, i);
 
-- 
2.39.1
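
P.S. For readers following along outside the KVM tree, here is a minimal
standalone sketch of how two booleans cover the three possible targets
(shared-only, private-only, and shared+private) without a tri-state enum,
mirroring how the mmu_notifier and attribute-conversion paths above set the
flags. The struct and helper names below are made up for illustration; this
is not the kernel's code or API.

/* Hypothetical stand-in for the two flags in struct kvm_gfn_range. */
#include <stdbool.h>
#include <stdio.h>

struct gfn_range_flags {
	bool only_private;
	bool only_shared;
};

/* Decide whether a mapping of the given type is in scope for the event. */
static bool range_targets(const struct gfn_range_flags *r, bool is_private)
{
	if (r->only_private)
		return is_private;
	if (r->only_shared)
		return !is_private;
	/* Neither flag set: shared and private mappings are both in play. */
	return true;
}

int main(void)
{
	/* mmu_notifier path: HVA-based, so shared-only. */
	struct gfn_range_flags hva_event = { .only_private = false,
					     .only_shared  = true };
	/* private <-> shared conversion: all mappings are in play. */
	struct gfn_range_flags attr_event = { .only_private = false,
					      .only_shared  = false };

	printf("hva event hits private mapping:  %d\n",
	       range_targets(&hva_event, true));
	printf("hva event hits shared mapping:   %d\n",
	       range_targets(&hva_event, false));
	printf("attr event hits private mapping: %d\n",
	       range_targets(&attr_event, true));
	printf("attr event hits shared mapping:  %d\n",
	       range_targets(&attr_event, false));
	return 0;
}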