On Fri, Feb 05, 2021, Chenyi Qiang wrote:
> In addition to the pkey check for user pages, extend pkr_mask to also
> cache the conditions where protection key checks for supervisor pages
> are needed. Add CR4_PKS to the mmu_role bits to track pkr_mask updates
> on a per-MMU basis.
>
> In the original cache conditions for pkr_mask, the U/S bit in the page
> tables is a judgement condition, replacing PFEC.RSVD in the page fault
> error code to form the index of 16 domains. PKS support extends the use
> of the U/S bit (if U/S=0, a PKS check is required). This adds an
> additional check of cr4_pke/cr4_pks to confirm the check is necessary
> and to distinguish PKU from PKS.
>
> Signed-off-by: Chenyi Qiang <chenyi.qiang@xxxxxxxxx>
> ---
>  arch/x86/include/asm/kvm_host.h | 11 +++---
>  arch/x86/kvm/mmu.h              | 13 ++++---
>  arch/x86/kvm/mmu/mmu.c          | 63 +++++++++++++++++++--------------
>  arch/x86/kvm/x86.c              |  3 +-
>  4 files changed, 53 insertions(+), 37 deletions(-)
>
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 1909d34cbac8..e515f1cecb88 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -294,7 +294,7 @@ union kvm_mmu_extended_role {
>  		unsigned int cr0_pg:1;
>  		unsigned int cr4_pae:1;
>  		unsigned int cr4_pse:1;
> -		unsigned int cr4_pke:1;
> +		unsigned int cr4_pkr:1;

Smushing these together will not work, as this code (from later in the patch)

> -	ext.cr4_pke = !!kvm_read_cr4_bits(vcpu, X86_CR4_PKE);
> +	ext.cr4_pkr = !!kvm_read_cr4_bits(vcpu, X86_CR4_PKE) ||
> +		      !!kvm_read_cr4_bits(vcpu, X86_CR4_PKS);

will generate the same mmu_role for CR4.PKE=0,PKS=1 and CR4.PKE=1,PKS=1 (and
other combinations).  I.e. KVM will fail to reconfigure the MMU, and thus skip
update_pkr_bitmask(), if the guest toggles PKE or PKS while the other PK* bit
is set.

>  		unsigned int cr4_smap:1;
>  		unsigned int cr4_smep:1;
>  		unsigned int maxphyaddr:6;
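
As a rough, untested sketch (not a drop-in patch), keeping the two CR4 bits
as separate role bits avoids the aliasing: every (PKE, PKS) combination then
yields a distinct mmu_role, so toggling either bit forces an MMU
reconfiguration and update_pkr_bitmask() runs.  Something like:

	/* In union kvm_mmu_extended_role: one role bit per CR4 PK* bit. */
	unsigned int cr4_pke:1;		/* CR4.PKE cached on its own */
	unsigned int cr4_pks:1;		/* CR4.PKS cached on its own */

	/*
	 * Where ext is filled in (kvm_calc_mmu_role_ext() in the current
	 * code): no OR, so no two (PKE, PKS) states collapse to one role.
	 */
	ext.cr4_pke = !!kvm_read_cr4_bits(vcpu, X86_CR4_PKE);
	ext.cr4_pks = !!kvm_read_cr4_bits(vcpu, X86_CR4_PKS);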