On Sat, Sep 16, 2017 at 07:34:17AM -0500, Brijesh Singh wrote:
> When SEV is active, guest memory is encrypted with guest-specific key, a

with a guest-specific key

> guest memory region shared with hypervisor must be mapped as unencrypted

with the hypervisor must be mapped as decrypted

> before we share it.

before we can share it.

> Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
> Cc: Ingo Molnar <mingo@xxxxxxxxxx>
> Cc: "H. Peter Anvin" <hpa@xxxxxxxxx>
> Cc: Borislav Petkov <bp@xxxxxxx>
> Cc: Paolo Bonzini <pbonzini@xxxxxxxxxx>
> Cc: "Radim Krčmář" <rkrcmar@xxxxxxxxxx>
> Cc: Tom Lendacky <thomas.lendacky@xxxxxxx>
> Cc: x86@xxxxxxxxxx
> Cc: linux-kernel@xxxxxxxxxxxxxxx
> Cc: kvm@xxxxxxxxxxxxxxx
> Signed-off-by: Brijesh Singh <brijesh.singh@xxxxxxx>
> ---
>  arch/x86/kernel/kvm.c | 46 +++++++++++++++++++++++++++++++++++++++++++---
>  1 file changed, 43 insertions(+), 3 deletions(-)
>
> diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
> index 874827b0d7ca..9ccb48b027e4 100644
> --- a/arch/x86/kernel/kvm.c
> +++ b/arch/x86/kernel/kvm.c
> @@ -75,8 +75,8 @@ static int parse_no_kvmclock_vsyscall(char *arg)
>
>  early_param("no-kvmclock-vsyscall", parse_no_kvmclock_vsyscall);
>
> -static DEFINE_PER_CPU(struct kvm_vcpu_pv_apf_data, apf_reason) __aligned(64);
> -static DEFINE_PER_CPU(struct kvm_steal_time, steal_time) __aligned(64);
> +static DEFINE_PER_CPU_UNENCRYPTED(struct kvm_vcpu_pv_apf_data, apf_reason) __aligned(64);
> +static DEFINE_PER_CPU_UNENCRYPTED(struct kvm_steal_time, steal_time) __aligned(64);
>  static int has_steal_clock = 0;
>
>  /*
> @@ -305,7 +305,7 @@ static void kvm_register_steal_time(void)
>  		cpu, (unsigned long long) slow_virt_to_phys(st));
>  }
>
> -static DEFINE_PER_CPU(unsigned long, kvm_apic_eoi) = KVM_PV_EOI_DISABLED;
> +static DEFINE_PER_CPU_UNENCRYPTED(unsigned long, kvm_apic_eoi) = KVM_PV_EOI_DISABLED;
>
>  static notrace void kvm_guest_apic_eoi_write(u32 reg, u32 val)
>  {
> @@ -419,9 +419,46 @@ void kvm_disable_steal_time(void)
>  	wrmsr(MSR_KVM_STEAL_TIME, 0, 0);
>  }
>
> +static inline void __init __set_percpu_var_unencrypted(
> +		void *var, int size)

Yuck, a line ending with an opening parenthesis. Do the obvious thing:

static inline void __init __set_percpu_decrypted(void *var, int size)

> +{
> +	unsigned long pa = slow_virt_to_phys(var);
> +
> +	/* decrypt the memory in-place */
> +	sme_early_decrypt(pa, size);
> +
> +	/* clear the C-bit from the page table */
> +	early_set_memory_decrypted(pa, size);

So those two do a lot of work like TLB flushing and WBINVD for each
per-CPU variable, and normally I'd say do this in one go instead of
variable per variable and thus save yourself the subsequent expensive
invalidation calls. But we do it only once, during boot, so maybe
something to think about later, when there's more time and boredom. :)
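For illustration only, one way to do that (made-up names, completely
untested) would be to carry the shared data in a single per-CPU struct
so that each CPU takes a single decrypt + C-bit pass instead of three:

/*
 * Illustration only: struct and function names here are made up.
 * Keeping the shared data together means one in-place decrypt and one
 * C-bit clear per CPU instead of one per variable.
 */
struct kvm_shared_percpu {
	struct kvm_vcpu_pv_apf_data	apf_reason;
	struct kvm_steal_time		steal_time;
	unsigned long			apic_eoi;
} __aligned(64);

static DEFINE_PER_CPU_UNENCRYPTED(struct kvm_shared_percpu, kvm_shared);

static void __init set_percpu_shared_decrypted(void)
{
	unsigned long pa;
	int cpu;

	for_each_possible_cpu(cpu) {
		pa = slow_virt_to_phys(&per_cpu(kvm_shared, cpu));

		/* decrypt the whole shared region in place */
		sme_early_decrypt(pa, sizeof(struct kvm_shared_percpu));

		/* clear the C-bit for the whole region */
		early_set_memory_decrypted(pa, sizeof(struct kvm_shared_percpu));
	}
}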
> +}
> +
> +/*
> + * Iterate through all possible CPUs and map the memory region pointed
> + * by apf_reason, steal_time and kvm_apic_eoi as unencrypted at once.

s/unencrypted/decrypted/g

> + *
> + * Note: we iterate through all possible CPUs to ensure that CPUs
> + * hotplugged will have their per-cpu variable already mapped as
> + * unencrypted.
> + */
> +static void __init set_percpu_unencrypted(void)
> +{
> +	int cpu;
> +
> +	for_each_possible_cpu(cpu) {
> +		__set_percpu_var_unencrypted(&per_cpu(apf_reason, cpu),
> +					     sizeof(struct kvm_vcpu_pv_apf_data));
> +		__set_percpu_var_unencrypted(&per_cpu(steal_time, cpu),
> +					     sizeof(struct kvm_steal_time));
> +		__set_percpu_var_unencrypted(&per_cpu(kvm_apic_eoi, cpu),
> +					     sizeof(unsigned long));
> +	}

Let it stick out and shorten the function name:

	for_each_possible_cpu(cpu) {
		__set_percpu_decrypted(&per_cpu(apf_reason, cpu), sizeof(struct kvm_vcpu_pv_apf_data));
		__set_percpu_decrypted(&per_cpu(steal_time, cpu), sizeof(struct kvm_steal_time));
		__set_percpu_decrypted(&per_cpu(kvm_apic_eoi, cpu), sizeof(unsigned long));
	}

Also, we agreed to call everything that's not encrypted "decrypted" so
that we have only two different states, encrypted and decrypted, and
thus less confusion.

> +}
> +
>  #ifdef CONFIG_SMP
>  static void __init kvm_smp_prepare_boot_cpu(void)
>  {
> +	if (sev_active())
> +		set_percpu_unencrypted();
> +

Move that sev_active() check into the function and call the latter
sev_map_percpu_data().

-- 
Regards/Gruss,
    Boris.

SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)
--
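For reference, here is an untested sketch of the function with all of
the above applied, i.e. the shorter __set_percpu_decrypted() name, the
sev_active() check moved inside, and the sev_map_percpu_data() name:

static void __init sev_map_percpu_data(void)
{
	int cpu;

	/* nothing to do when SEV is not active */
	if (!sev_active())
		return;

	for_each_possible_cpu(cpu) {
		/* map each shared per-CPU variable decrypted */
		__set_percpu_decrypted(&per_cpu(apf_reason, cpu), sizeof(struct kvm_vcpu_pv_apf_data));
		__set_percpu_decrypted(&per_cpu(steal_time, cpu), sizeof(struct kvm_steal_time));
		__set_percpu_decrypted(&per_cpu(kvm_apic_eoi, cpu), sizeof(unsigned long));
	}
}

kvm_smp_prepare_boot_cpu() can then call sev_map_percpu_data()
unconditionally.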