On Wednesday 03 Feb 2021 at 16:11:47 (+0000), Will Deacon wrote:
> On Fri, Jan 08, 2021 at 12:15:24PM +0000, Quentin Perret wrote:
> > When KVM runs in protected nVHE mode, make use of a stage 2 page-table
> > to give the hypervisor some control over the host memory accesses. At
> > the moment all memory aborts from the host will be instantly idmapped
> > RWX at stage 2 in a lazy fashion. Later patches will make use of that
> > infrastructure to implement access control restrictions to e.g. protect
> > guest memory from the host.
> >
> > Signed-off-by: Quentin Perret <qperret@xxxxxxxxxx>
> > ---
> >  arch/arm64/include/asm/kvm_cpufeature.h       |   2 +
> >  arch/arm64/kernel/image-vars.h                |   3 +
> >  arch/arm64/kvm/hyp/include/nvhe/mem_protect.h |  33 +++
> >  arch/arm64/kvm/hyp/nvhe/Makefile              |   2 +-
> >  arch/arm64/kvm/hyp/nvhe/hyp-init.S            |   1 +
> >  arch/arm64/kvm/hyp/nvhe/hyp-main.c            |   6 +
> >  arch/arm64/kvm/hyp/nvhe/mem_protect.c         | 191 ++++++++++++++++++
> >  arch/arm64/kvm/hyp/nvhe/setup.c               |   6 +
> >  arch/arm64/kvm/hyp/nvhe/switch.c              |   7 +-
> >  arch/arm64/kvm/hyp/nvhe/tlb.c                 |   4 +-
> >  10 files changed, 248 insertions(+), 7 deletions(-)
> >  create mode 100644 arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
> >  create mode 100644 arch/arm64/kvm/hyp/nvhe/mem_protect.c
>
> [...]
>
> > +void handle_host_mem_abort(struct kvm_cpu_context *host_ctxt)
> > +{
> > +	enum kvm_pgtable_prot prot;
> > +	u64 far, hpfar, esr, ipa;
> > +	int ret;
> > +
> > +	esr = read_sysreg_el2(SYS_ESR);
> > +	if (!__get_fault_info(esr, &far, &hpfar))
> > +		hyp_panic();
> > +
> > +	prot = KVM_PGTABLE_PROT_R | KVM_PGTABLE_PROT_W | KVM_PGTABLE_PROT_X;
> > +	ipa = (hpfar & HPFAR_MASK) << 8;
> > +	ret = host_stage2_map(ipa, PAGE_SIZE, prot);
>
> Can we try to put down a block mapping if the whole thing falls within
> memory?

Yes we can! And in fact we can do that outside of memory too. It's
queued for v3 already, so stay tuned ... :)

Thanks,
Quentin
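
P.S. For anyone wondering what "put down a block mapping if the whole
thing falls within memory" amounts to in practice, below is a rough,
self-contained sketch of the granule-selection arithmetic only. It is
not the code queued for v3: the helper name pick_block_size() and the
fixed 4K/2M/1G candidate list are illustrative assumptions. The idea is
to align the faulting address down to each candidate block size,
largest first, and use the first block that lies entirely inside the
containing range, falling back to a single page otherwise.

/*
 * Illustrative sketch, not the v3 implementation: pick the largest
 * granule whose naturally aligned block around the faulting address
 * fits entirely inside the containing [start, end) range.
 */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define SZ_4K	(1ULL << 12)
#define SZ_2M	(1ULL << 21)
#define SZ_1G	(1ULL << 30)

/* Granules assumed mappable with a single entry for this sketch. */
static const uint64_t block_sizes[] = { SZ_1G, SZ_2M, SZ_4K };

static uint64_t pick_block_size(uint64_t addr, uint64_t start, uint64_t end)
{
	for (size_t i = 0; i < sizeof(block_sizes) / sizeof(block_sizes[0]); i++) {
		uint64_t sz = block_sizes[i];
		uint64_t base = addr & ~(sz - 1);	/* align down to the block */

		/* Use this granule only if the whole block stays in [start, end). */
		if (base >= start && base + sz <= end)
			return sz;
	}

	return 0;	/* nothing fits; a real implementation would bail out */
}

int main(void)
{
	/* Fault inside a 1GiB-aligned, 1GiB-sized region: a 1G block fits. */
	printf("%llu KiB\n",
	       (unsigned long long)(pick_block_size(SZ_1G + SZ_2M, SZ_1G, 2 * SZ_1G) >> 10));

	/* Fault inside a small [2M, 32M) range: only a 2M block fits. */
	printf("%llu KiB\n",
	       (unsigned long long)(pick_block_size(3 * SZ_2M + SZ_4K, SZ_2M, 16 * SZ_2M) >> 10));

	return 0;
}

The real series does this through the hyp stage-2 page-table code
(host_stage2_map() and friends in the quoted patch) and, as noted
above, also covers ranges outside of memory, so treat the sketch purely
as an illustration of the alignment check.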