On Thu, Sep 06, 2018 at 02:24:23PM +0200, Borislav Petkov wrote:
> On Thu, Sep 06, 2018 at 06:43:02AM -0500, Brijesh Singh wrote:
> > Currently, the per-cpu pvclock data is allocated dynamically when
> > cpu > HVC_BOOT_ARRAY_SIZE. The physical address of this variable is
> > shared between the guest and the hypervisor, hence it must be mapped
> > as unencrypted (i.e. C=0) when SEV is active.
> >
> > When SEV is active, we will be wasting a fairly sizeable amount of
> > memory since each CPU will be doing a separate 4k allocation so that
> > it can clear the C-bit. Let's define a few extra static, page-sized
> > arrays of pvclock data. In the preparatory stage of CPU hotplug, use
> > an element of this static array to avoid the dynamic allocation. This
> > array will be put in the .data..decrypted section so that it is
> > mapped with C=0 during boot.
> >
> > In the non-SEV case, this static page will be unused and freed by
> > free_decrypted_mem().
> >
> > Signed-off-by: Brijesh Singh <brijesh.singh@xxxxxxx>
> > Suggested-by: Sean Christopherson <sean.j.christopherson@xxxxxxxxx>
> > Cc: Tom Lendacky <thomas.lendacky@xxxxxxx>
> > Cc: kvm@xxxxxxxxxxxxxxx
> > Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
> > Cc: Borislav Petkov <bp@xxxxxxx>
> > Cc: "H. Peter Anvin" <hpa@xxxxxxxxx>
> > Cc: linux-kernel@xxxxxxxxxxxxxxx
> > Cc: Paolo Bonzini <pbonzini@xxxxxxxxxx>
> > Cc: Sean Christopherson <sean.j.christopherson@xxxxxxxxx>
> > Cc: kvm@xxxxxxxxxxxxxxx
> > Cc: "Radim Krčmář" <rkrcmar@xxxxxxxxxx>
> > ---
> >  arch/x86/include/asm/mem_encrypt.h |  4 ++++
> >  arch/x86/kernel/kvmclock.c         | 22 +++++++++++++++++++---
> >  arch/x86/kernel/vmlinux.lds.S      |  3 +++
> >  arch/x86/mm/init.c                 |  3 +++
> >  arch/x86/mm/mem_encrypt.c          | 10 ++++++++++
> >  5 files changed, 39 insertions(+), 3 deletions(-)
> >
> > diff --git a/arch/x86/include/asm/mem_encrypt.h b/arch/x86/include/asm/mem_encrypt.h
> > index 802b2eb..aa204af 100644
> > --- a/arch/x86/include/asm/mem_encrypt.h
> > +++ b/arch/x86/include/asm/mem_encrypt.h
> > @@ -48,11 +48,13 @@ int __init early_set_memory_encrypted(unsigned long vaddr, unsigned long size);
> >
> >  /* Architecture __weak replacement functions */
> >  void __init mem_encrypt_init(void);
> > +void __init free_decrypted_mem(void);
> >
> >  bool sme_active(void);
> >  bool sev_active(void);
> >
> >  #define __decrypted __attribute__((__section__(".data..decrypted")))
> > +#define __decrypted_hvclock __attribute__((__section__(".data..decrypted_hvclock")))
>
> So are we going to be defining a decrypted section for every piece of
> machinery now?
>
> That's a bit too much in my book.
>
> Why can't you simply free everything in .data..decrypted on !SEV guests?

That would prevent adding __decrypted to existing declarations, e.g.
hv_clock_boot, which would be ugly in its own right.  A more generic
solution would be to add something like __decrypted_exclusive to mark
data that is used if and only if SEV is active, and then free the
SEV-only data when SEV is disabled.

Originally, my thought was that this would be a one-off case and the
array could be freed directly in kvmclock_init(), e.g.:

static struct pvclock_vsyscall_time_info
	hv_clock_aux[HVC_AUX_ARRAY_SIZE] __decrypted __aligned(PAGE_SIZE);

...

void __init kvmclock_init(void)
{
	u8 flags;

	if (!sev_active())
		free_init_pages("unused decrypted",
				(unsigned long)hv_clock_aux,
				(unsigned long)hv_clock_aux + sizeof(hv_clock_aux));

> --
> Regards/Gruss,
>     Boris.
>
> SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton,
> HRB 21284 (AG Nürnberg)
> --
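
Purely as an illustrative aside on the __decrypted_exclusive idea floated
above, here is a minimal sketch of how it might look.  It assumes a
dedicated section name, linker-provided start/end symbols, and a body for
the free_decrypted_mem() hook that the patch declares; none of this is
code from the posted series:

/*
 * Sketch only -- names and symbols below are assumptions for
 * illustration, not the posted patch.  SEV-only data is placed in its
 * own section so it can be freed wholesale when SEV is not active.
 */
#define __decrypted_exclusive \
	__attribute__((__section__(".data..decrypted_exclusive")))

/* Example user: auxiliary pvclock pages needed only under SEV. */
static struct pvclock_vsyscall_time_info
	hv_clock_aux[HVC_AUX_ARRAY_SIZE] __decrypted_exclusive
	__aligned(PAGE_SIZE);

/* Assumed symbols, emitted by vmlinux.lds.S around the new section. */
extern char __start_data_decrypted_exclusive[];
extern char __end_data_decrypted_exclusive[];

void __init free_decrypted_mem(void)
{
	if (sev_active())
		return;

	/* Not an SEV guest: the SEV-only data is never used, reclaim it. */
	free_init_pages("unused decrypted",
			(unsigned long)__start_data_decrypted_exclusive,
			(unsigned long)__end_data_decrypted_exclusive);
}

For this to work the linker script would also have to page-align
.data..decrypted_exclusive and emit the assumed start/end symbols, so
that free_init_pages() is handed whole pages.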