On Mon, 2018-09-10 at 08:15 -0500, Brijesh Singh wrote:
>
> On 9/10/18 7:27 AM, Borislav Petkov wrote:
> >
> > On Fri, Sep 07, 2018 at 12:57:30PM -0500, Brijesh Singh wrote:
> > >
> > > diff --git a/arch/x86/kernel/kvmclock.c b/arch/x86/kernel/kvmclock.c
> > > index 376fd3a..6086b56 100644
> > > --- a/arch/x86/kernel/kvmclock.c
> > > +++ b/arch/x86/kernel/kvmclock.c
> > > @@ -65,6 +65,15 @@ static struct pvclock_vsyscall_time_info
> > >  static struct pvclock_wall_clock wall_clock __decrypted;
> > >  static DEFINE_PER_CPU(struct pvclock_vsyscall_time_info *, hv_clock_per_cpu);
> > >
> > > +#ifdef CONFIG_AMD_MEM_ENCRYPT
> > > +/*
> > > + * The auxiliary array will be used when SEV is active. In non-SEV case,
> > > + * it will be freed by free_decrypted_mem().
> > > + */
> > > +static struct pvclock_vsyscall_time_info
> > > +			hv_clock_aux[NR_CPUS] __decrypted_aux;
> >
> > Hmm, so worst case that's 64 4K pages:
> >
> > (8192*32)/4096 = 64 4K pages.
>
> We can minimize the worst case memory usage. The number of VCPUs
> supported by KVM may be less than NR_CPUS, e.g. currently KVM_MAX_VCPUS
> is set to 288.

KVM_MAX_VCPUS is a property of the host, whereas this code runs in the
guest, e.g. KVM_MAX_VCPUS could be 2048 in the host for all we know.

> (288 * 64)/4096 = 4 4K pages.
>
> (pvclock_vsyscall_time_info is cache aligned so it will be 64 bytes)

Ah, I was wondering why my calculations were always different than
yours.  I was looking at struct pvclock_vcpu_time_info, which is 32
bytes.

> #if NR_CPUS > KVM_MAX_VCPUS
> #define HV_AUX_ARRAY_SIZE	KVM_MAX_VCPUS
> #else
> #define HV_AUX_ARRAY_SIZE	NR_CPUS
> #endif
>
> static struct pvclock_vsyscall_time_info
> 			hv_clock_aux[HV_AUX_ARRAY_SIZE] __decrypted_aux;