On Mon, Jul 18, 2022 at 12:13 AM Marc Zyngier <maz@xxxxxxxxxx> wrote:
>
> On Fri, 15 Jul 2022 07:10:18 +0100,
> Kalesh Singh <kaleshsingh@xxxxxxxxxx> wrote:
> >
> > In protected nVHE mode the host cannot directly access
> > hypervisor memory, so we will dump the hypervisor stacktrace
> > to a shared buffer with the host.
> >
> > The minimum size do the buffer required, assuming the min frame
>
> s/do/for/ ?

Ack

> > size of [x29, x30] (2 * sizeof(long)), is half the combined size of
> > the hypervisor and overflow stacks plus an additional entry to
> > delimit the end of the stacktrace.
>
> Let me see if I understand this: the maximum stack size is the
> combination of the HYP and overflow stacks, and the smallest possible
> stack frame is 128bit (only FP+LR). The buffer thus needs to provide
> one 64bit entry per stack frame that fits in the combined stack, plus
> one entry as an end marker.
>
> So the resulting size is half of the combined stack size, plus a
> single 64bit word. Is this correct?

That understanding is correct. So for 64 KB pages it is slightly more
than half a page (the overflow stack is 4 KB).

> >
> > The stacktrace buffers are used later in the seried to dump the
> > nVHE hypervisor stacktrace when using protected-mode.
> >
> > Signed-off-by: Kalesh Singh <kaleshsingh@xxxxxxxxxx>
> > ---
> >  arch/arm64/include/asm/memory.h      | 7 +++++++
> >  arch/arm64/kvm/hyp/nvhe/stacktrace.c | 4 ++++
> >  2 files changed, 11 insertions(+)
> >
> > diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
> > index 0af70d9abede..28a4893d4b84 100644
> > --- a/arch/arm64/include/asm/memory.h
> > +++ b/arch/arm64/include/asm/memory.h
> > @@ -113,6 +113,13 @@
> >
> >  #define OVERFLOW_STACK_SIZE	SZ_4K
> >
> > +/*
> > + * With the minimum frame size of [x29, x30], exactly half the combined
> > + * sizes of the hyp and overflow stacks is needed to save the unwinded
> > + * stacktrace; plus an additional entry to delimit the end.
> > + */
> > +#define NVHE_STACKTRACE_SIZE	((OVERFLOW_STACK_SIZE + PAGE_SIZE) / 2 + sizeof(long))
> > +
> >  /*
> >   * Alignment of kernel segments (e.g. .text, .data).
> >   *
> > diff --git a/arch/arm64/kvm/hyp/nvhe/stacktrace.c b/arch/arm64/kvm/hyp/nvhe/stacktrace.c
> > index a3d5b34e1249..69e65b457f1c 100644
> > --- a/arch/arm64/kvm/hyp/nvhe/stacktrace.c
> > +++ b/arch/arm64/kvm/hyp/nvhe/stacktrace.c
> > @@ -9,3 +9,7 @@
> >
> >  DEFINE_PER_CPU(unsigned long [OVERFLOW_STACK_SIZE/sizeof(long)], overflow_stack)
> >  	__aligned(16);
> > +
> > +#ifdef CONFIG_PROTECTED_NVHE_STACKTRACE
> > +DEFINE_PER_CPU(unsigned long [NVHE_STACKTRACE_SIZE/sizeof(long)], pkvm_stacktrace);
> > +#endif /* CONFIG_PROTECTED_NVHE_STACKTRACE */
>
> OK, so the allocation exists even if KVM is not running in protected
> mode. I guess this is OK for now, but definitely reinforces my request
> that this is only there when compiled for debug mode.

Yes, but if you aren't running in protected mode you can avoid it by
setting PROTECTED_NVHE_STACKTRACE=n.

Thanks,
Kalesh

> Thanks,
>
>         M.
>
> --
> Without deviation from the norm, progress is not possible.
_______________________________________________
kvmarm mailing list
kvmarm@xxxxxxxxxxxxxxxxxxxxx
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm
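
As a quick sanity check of the size arithmetic discussed above, here is a
minimal userspace sketch (not part of the patch) that evaluates the
NVHE_STACKTRACE_SIZE expression from the diff for the three arm64 page-size
configurations. It assumes sizeof(long) == 8 and OVERFLOW_STACK_SIZE == 4 KB,
as on arm64 with this series applied.

#include <stdio.h>

#define SZ_4K			4096UL
#define OVERFLOW_STACK_SIZE	SZ_4K

/* Same expression as the patch, parameterised on the page size. */
static unsigned long nvhe_stacktrace_size(unsigned long page_size)
{
	return (OVERFLOW_STACK_SIZE + page_size) / 2 + sizeof(long);
}

int main(void)
{
	/* arm64 supports 4K, 16K and 64K page granules. */
	unsigned long page_sizes[] = { SZ_4K, 16 * 1024UL, 64 * 1024UL };
	unsigned int i;

	for (i = 0; i < 3; i++) {
		unsigned long ps = page_sizes[i];

		/*
		 * With a minimum frame of [x29, x30] (16 bytes), the combined
		 * hyp + overflow stacks hold at most (ps + 4K) / 16 frames,
		 * each needing one 8-byte entry, plus one end marker.
		 */
		printf("PAGE_SIZE=%2luK -> NVHE_STACKTRACE_SIZE=%lu bytes\n",
		       ps / 1024, nvhe_stacktrace_size(ps));
	}

	return 0;
}

For 64 KB pages this prints 34824 bytes, i.e. slightly more than half a page,
matching the figure mentioned in the reply above; for 4 KB pages it is 4104
bytes.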