On Thu, 2022-04-14 at 15:19 +0200, Vitaly Kuznetsov wrote:
> Extended GVA ranges support bit seems to indicate whether lower 12
> bits of GVA can be used to specify up to 4095 additional consequent
> GVAs to flush. This is somewhat described in TLFS.
>
> Previously, KVM was handling HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST{,EX}
> requests by flushing the whole VPID so technically, extended GVA
> ranges were already supported. As such requests are handled more
> gently now, advertizing support for extended ranges starts making
> sense to reduce the size of TLB flush requests.
>
> Signed-off-by: Vitaly Kuznetsov <vkuznets@xxxxxxxxxx>
> ---
>  arch/x86/include/asm/hyperv-tlfs.h | 2 ++
>  arch/x86/kvm/hyperv.c              | 1 +
>  2 files changed, 3 insertions(+)
>
> diff --git a/arch/x86/include/asm/hyperv-tlfs.h b/arch/x86/include/asm/hyperv-tlfs.h
> index 0a9407dc0859..5225a85c08c3 100644
> --- a/arch/x86/include/asm/hyperv-tlfs.h
> +++ b/arch/x86/include/asm/hyperv-tlfs.h
> @@ -61,6 +61,8 @@
>  #define HV_FEATURE_GUEST_CRASH_MSR_AVAILABLE		BIT(10)
>  /* Support for debug MSRs available */
>  #define HV_FEATURE_DEBUG_MSRS_AVAILABLE			BIT(11)
> +/* Support for extended gva ranges for flush hypercalls available */
> +#define HV_FEATURE_EXT_GVA_RANGES_FLUSH			BIT(14)
>  /*
>   * Support for returning hypercall output block via XMM
>   * registers is available
> diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
> index 759e1a16e5c3..1a6f9628cee9 100644
> --- a/arch/x86/kvm/hyperv.c
> +++ b/arch/x86/kvm/hyperv.c
> @@ -2702,6 +2702,7 @@ int kvm_get_hv_cpuid(struct kvm_vcpu *vcpu, struct kvm_cpuid2 *cpuid,
>  		ent->ebx |= HV_DEBUGGING;
>  		ent->edx |= HV_X64_GUEST_DEBUGGING_AVAILABLE;
>  		ent->edx |= HV_FEATURE_DEBUG_MSRS_AVAILABLE;
> +		ent->edx |= HV_FEATURE_EXT_GVA_RANGES_FLUSH;
>
>  		/*
>  		 * Direct Synthetic timers only make sense with in-kernel

I do think that we need to ask Microsoft to document this, since from
the spec (v6.0b) the only mention of this is

"Bit 14: ExtendedGvaRangesForFlushVirtualAddressListAvailable"

Reviewed-by: Maxim Levitsky <mlevitsk@xxxxxxxxxx>

Best regards,
	Maxim Levitsky
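
P.S. For anyone reading along, here is a minimal sketch (not part of the
patch, and not an interface defined by the TLFS or KVM) of how a guest
could pack one extended GVA range entry as the commit message describes:
the page-aligned GVA in the upper bits and the number of *additional*
consecutive 4K pages (0..4095) in the low 12 bits. The helper and macro
names below are made up for illustration.

#include <stdint.h>

/* Low 12 bits of an entry: number of extra consecutive pages to flush. */
#define HV_EXT_GVA_PAGE_COUNT_MASK	0xfffULL

/*
 * Hypothetical helper: encode 'nr_pages' consecutive 4K pages starting at
 * 'gva' into a single flush-list entry (base page plus up to 4095 more).
 */
static inline uint64_t encode_ext_gva_range(uint64_t gva, uint64_t nr_pages)
{
	uint64_t extra = nr_pages ? nr_pages - 1 : 0;

	if (extra > HV_EXT_GVA_PAGE_COUNT_MASK)
		extra = HV_EXT_GVA_PAGE_COUNT_MASK;

	return (gva & ~HV_EXT_GVA_PAGE_COUNT_MASK) | extra;
}

With this encoding, e.g. encode_ext_gva_range(0x7f0000000000ULL, 4096)
covers a 16M region with a single list entry instead of 4096 entries,
which is the request-size reduction the commit message is after.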