On Wed, Jan 09, 2019 at 01:54:34PM +0000, Marc Zyngier wrote:
> When running VHE, there is no need to jump via some stub to perform
> a "HYP" function call, as there is a single address space.
> 
> Let's thus change kvm_call_hyp() and co to perform a direct call
> in this case. Although this results in a bit of code expansion,
> it allows the compiler to check for type compatibility, something
> that we are missing so far.
> 
> Signed-off-by: Marc Zyngier <marc.zyngier@xxxxxxx>
> ---
>  arch/arm64/include/asm/kvm_host.h | 32 +++++++++++++++++++++++++++++--
>  1 file changed, 30 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index e54cb7c88a4e..df32edbadd69 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -370,8 +370,36 @@ void kvm_arm_halt_guest(struct kvm *kvm);
>  void kvm_arm_resume_guest(struct kvm *kvm);
>  
>  u64 __kvm_call_hyp(void *hypfn, ...);
> -#define kvm_call_hyp(f, ...) __kvm_call_hyp(kvm_ksym_ref(f), ##__VA_ARGS__)
> -#define kvm_call_hyp_ret(f, ...) kvm_call_hyp(f, ##__VA_ARGS__)
> +
> +/*
> + * The couple of isb() below are there to guarantee the same behaviour
> + * on VHE as on !VHE, where the eret to EL1 acts as a context
> + * synchronization event.
> + */
> +#define kvm_call_hyp(f, ...)						\
> +	do {								\
> +		if (cpus_have_const_cap(ARM64_HAS_VIRT_HOST_EXTN)) {	\
> +			f(__VA_ARGS__);					\
> +			isb();						\
> +		} else {						\
> +			__kvm_call_hyp(kvm_ksym_ref(f), ##__VA_ARGS__); \
> +		}							\
> +	} while(0)
> +
> +#define kvm_call_hyp_ret(f, ...)					\
> +	({								\
> +		u64 ret;						\
> +									\
> +		if (cpus_have_const_cap(ARM64_HAS_VIRT_HOST_EXTN)) {	\
> +			ret = f(__VA_ARGS__);				\

__kvm_get_mdcr_el2 and __kvm_vcpu_run_nvhe don't return a u64; they
return a smaller type. I guess any issues would be picked up when
compiling, but should the name of the macro be clearer as to the
assumptions it makes? Or perhaps take an argument which is the type
of ret (see the sketch at the end of this mail)?

Andrew Murray

> +			isb();						\
> +		} else {						\
> +			ret = __kvm_call_hyp(kvm_ksym_ref(f),		\
> +					     ##__VA_ARGS__);		\
> +		}							\
> +									\
> +		ret;							\
> +	})
>  
>  void force_vm_exit(const cpumask_t *mask);
>  void kvm_mmu_wp_memory_region(struct kvm *kvm, int slot);
> -- 
> 2.20.1
> 
> _______________________________________________
> kvmarm mailing list
> kvmarm@xxxxxxxxxxxxxxxxxxxxx
> https://lists.cs.columbia.edu/mailman/listinfo/kvmarm
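
To illustrate that last suggestion: a variant of the macro that takes
the return type as an explicit argument could look something like the
sketch below. This is untested, and the kvm_call_hyp_ret_type name is
made up here, but it would keep the temporary the same width as the
callee's actual return type:

/* Sketch only: caller supplies the return type of the HYP function. */
#define kvm_call_hyp_ret_type(type, f, ...)				\
	({								\
		type __ret;						\
									\
		if (cpus_have_const_cap(ARM64_HAS_VIRT_HOST_EXTN)) {	\
			__ret = f(__VA_ARGS__);				\
			isb();						\
		} else {						\
			__ret = __kvm_call_hyp(kvm_ksym_ref(f),		\
					       ##__VA_ARGS__);		\
		}							\
									\
		__ret;							\
	})

A caller would then spell out the type it expects, e.g. (assuming
__kvm_get_mdcr_el2 returns a u32, as per the comment above):

	u32 mdcr = kvm_call_hyp_ret_type(u32, __kvm_get_mdcr_el2);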