Hi all,

This is v5 of the nVHE hypervisor stack enhancements. The main changes in
this version are:

  - Align private allocations based on the order of their size
  - Allocate a single private VA range for both the stack and its guard
    page (don't rely on the allocator providing separate allocations that
    happen to be contiguous)
  - Rebase the series on 5.17-rc7

Previous versions can be found at:

v4: https://lore.kernel.org/r/20220225033548.1912117-1-kaleshsingh@xxxxxxxxxx/
v3: https://lore.kernel.org/r/20220224051439.640768-1-kaleshsingh@xxxxxxxxxx/
v2: https://lore.kernel.org/r/20220222165212.2005066-1-kaleshsingh@xxxxxxxxxx/
v1: https://lore.kernel.org/r/20220210224220.4076151-1-kaleshsingh@xxxxxxxxxx/

The previous cover letter has been copied below for convenience.

Thanks,
Kalesh

-----

This series is based on 5.17-rc7 and adds the following stack features to
the KVM nVHE hypervisor:

== Hyp Stack Guard Pages ==

Based on the technique used by arm64 VMAP_STACK to detect overflow: the
stack is aligned such that the 'stack shift' bit of any valid SP is 1.
This bit can be tested on exception entry to detect overflow without
corrupting GPRs.

== Hyp Stack Unwinder ==

Based on the arm64 kernel stack unwinder (see
arch/arm64/kernel/stacktrace.c).

The unwinding and dumping of the hyp stack is not enabled by default, and
depends on CONFIG_NVHE_EL2_DEBUG to avoid potential information leaks.

When CONFIG_NVHE_EL2_DEBUG is enabled, the host stage-2 protection is
disabled, allowing the host to read the hypervisor stack pages and unwind
the stack from EL1. This lets us print the hypervisor stacktrace before
panicking the host, as shown below.

Example call trace:

[ 98.916444][ T426] kvm [426]: nVHE hyp panic at: [<ffffffc0096156fc>] __kvm_nvhe_overflow_stack+0x8/0x34!
[ 98.918360][ T426] nVHE HYP call trace:
[ 98.918692][ T426] kvm [426]: [<ffffffc009615aac>] __kvm_nvhe_cpu_prepare_nvhe_panic_info+0x4c/0x68
[ 98.919545][ T426] kvm [426]: [<ffffffc0096159a4>] __kvm_nvhe_hyp_panic+0x2c/0xe8
[ 98.920107][ T426] kvm [426]: [<ffffffc009615ad8>] __kvm_nvhe_hyp_panic_bad_stack+0x10/0x10
[ 98.920665][ T426] kvm [426]: [<ffffffc009610a4c>] __kvm_nvhe___kvm_hyp_host_vector+0x24c/0x794
[ 98.921292][ T426] kvm [426]: [<ffffffc009615718>] __kvm_nvhe_overflow_stack+0x24/0x34
. . .
[ 98.973382][ T426] kvm [426]: [<ffffffc009615718>] __kvm_nvhe_overflow_stack+0x24/0x34
[ 98.973816][ T426] kvm [426]: [<ffffffc0096152f4>] __kvm_nvhe___kvm_vcpu_run+0x38/0x438
[ 98.974255][ T426] kvm [426]: [<ffffffc009616f80>] __kvm_nvhe_handle___kvm_vcpu_run+0x1c4/0x364
[ 98.974719][ T426] kvm [426]: [<ffffffc009616928>] __kvm_nvhe_handle_trap+0xa8/0x130
[ 98.975152][ T426] kvm [426]: [<ffffffc009610064>] __kvm_nvhe___host_exit+0x64/0x64
[ 98.975588][ T426] ---- end of nVHE HYP call trace ----

Kalesh Singh (8):
  KVM: arm64: Introduce hyp_alloc_private_va_range()
  KVM: arm64: Introduce pkvm_alloc_private_va_range()
  KVM: arm64: Add guard pages for KVM nVHE hypervisor stack
  KVM: arm64: Add guard pages for pKVM (protected nVHE) hypervisor stack
  KVM: arm64: Detect and handle hypervisor stack overflows
  KVM: arm64: Add hypervisor overflow stack
  KVM: arm64: Unwind and dump nVHE HYP stacktrace
  KVM: arm64: Symbolize the nVHE HYP backtrace

 arch/arm64/include/asm/kvm_asm.h     |  21 +++
 arch/arm64/include/asm/kvm_mmu.h     |   4 +
 arch/arm64/include/asm/stacktrace.h  |  12 ++
 arch/arm64/kernel/stacktrace.c       | 210 ++++++++++++++++++++++++---
 arch/arm64/kvm/Kconfig               |   5 +-
 arch/arm64/kvm/arm.c                 |  42 +++++-
 arch/arm64/kvm/handle_exit.c         |  16 +-
 arch/arm64/kvm/hyp/include/nvhe/mm.h |   1 +
 arch/arm64/kvm/hyp/nvhe/host.S       |  29 ++++
 arch/arm64/kvm/hyp/nvhe/mm.c         |  56 ++++---
 arch/arm64/kvm/hyp/nvhe/setup.c      |  31 +++-
 arch/arm64/kvm/hyp/nvhe/switch.c     |  30 +++-
 arch/arm64/kvm/mmu.c                 |  67 ++++++---
 scripts/kallsyms.c                   |   2 +-
 14 files changed, 440 insertions(+), 86 deletions(-)

base-commit: ffb217a13a2eaf6d5bd974fc83036a53ca69f1e2
-- 
2.35.1.616.g0bdcbb4464-goog

_______________________________________________
kvmarm mailing list
kvmarm@xxxxxxxxxxxxxxxxxxxxx
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm