On 15/01/19 05:28, Kai Huang wrote:
> AMD's SME/SEV is no longer the only case which reduces supported
> physical address bits, since Intel introduced Multi-key Total Memory
> Encryption (MKTME), which repurposes high bits of physical address as
> keyID, thus effectively shrinks supported physical address bits. To
> cover both cases (and potential similar future features), kernel MM
> introduced generic dynamic physical address mask instead of hard-coded
> __PHYSICAL_MASK in 'commit 94d49eb30e854 ("x86/mm: Decouple dynamic
> __PHYSICAL_MASK from AMD SME")'. KVM should use that too.
> 
> Change PT64_BASE_ADDR_MASK to use kernel dynamic physical address mask
> when it is enabled, instead of sme_clr. PT64_DIR_BASE_ADDR_MASK is also
> deleted since it is not used at all.
> 
> Signed-off-by: Kai Huang <kai.huang@xxxxxxxxxxxxxxx>
> ---
>  arch/x86/kvm/mmu.c | 8 +++++---
>  1 file changed, 5 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index ce770b446238..1f81cc1f35b2 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -109,9 +109,11 @@ module_param(dbg, bool, 0644);
>  	(((address) >> PT32_LEVEL_SHIFT(level)) & ((1 << PT32_LEVEL_BITS) - 1))
> 
> 
> -#define PT64_BASE_ADDR_MASK __sme_clr((((1ULL << 52) - 1) & ~(u64)(PAGE_SIZE-1)))
> -#define PT64_DIR_BASE_ADDR_MASK \
> -	(PT64_BASE_ADDR_MASK & ~((1ULL << (PAGE_SHIFT + PT64_LEVEL_BITS)) - 1))
> +#ifdef CONFIG_DYNAMIC_PHYSICAL_MASK
> +#define PT64_BASE_ADDR_MASK (physical_mask & ~(u64)(PAGE_SIZE-1))
> +#else
> +#define PT64_BASE_ADDR_MASK (((1ULL << 52) - 1) & ~(u64)(PAGE_SIZE-1))
> +#endif
>  #define PT64_LVL_ADDR_MASK(level) \
>  	(PT64_BASE_ADDR_MASK & ~((1ULL << (PAGE_SHIFT + (((level) - 1) \
>  	* PT64_LEVEL_BITS))) - 1))
> 

Queued, thanks.

Paolo
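
For readers unfamiliar with the masking idea the patch relies on, below
is a minimal user-space sketch (not kernel code; the 46-bit usable width
and the 6-bit keyID are made-up example values). With MKTME the keyID
sits in the top physical-address bits of a PTE, so a mask built from the
reduced number of usable bits, analogous to the kernel's physical_mask,
strips the keyID together with the page-offset bits:

/*
 * Illustrative sketch only: mimics what
 * "pte & PT64_BASE_ADDR_MASK" achieves once PT64_BASE_ADDR_MASK is
 * derived from a dynamic physical address mask.  Bit widths are
 * example values, not real hardware parameters.
 */
#include <stdio.h>
#include <stdint.h>

#define PAGE_SIZE	4096ULL

int main(void)
{
	/* Assume 46 usable physical bits; the bits above carry the keyID. */
	unsigned int phys_bits = 46;
	uint64_t physical_mask = (1ULL << phys_bits) - 1;

	/* Counterpart of PT64_BASE_ADDR_MASK built from the dynamic mask. */
	uint64_t pt64_base_addr_mask = physical_mask & ~(uint64_t)(PAGE_SIZE - 1);

	/* Example PTE: keyID 3 in the high bits, a page frame, some flag bits. */
	uint64_t keyid = 3;
	uint64_t pte = (keyid << phys_bits) | 0x123456000ULL | 0x67;

	printf("pte          = 0x%016llx\n", (unsigned long long)pte);
	printf("base address = 0x%016llx\n",
	       (unsigned long long)(pte & pt64_base_addr_mask));

	return 0;
}

Running it prints 0x123456000 as the base address, i.e. the keyID bits
and the low flag/offset bits are masked off. A fixed __sme_clr()-based
mask only knows about SME's C-bit, which is why switching to the
dynamic mask is what keeps the extracted address correct when other
features repurpose high physical bits.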