Hi Kalesh,

On Wed, Jul 20, 2022 at 10:57:27PM -0700, Kalesh Singh wrote:

[...]

> +/*
> + * pkvm_dump_backtrace - Dump the protected nVHE HYP backtrace.
> + *
> + * @hyp_offset: hypervisor offset, used for address translation.
> + *
> + * Dumping of the pKVM HYP backtrace is done by reading the
> + * stack addresses from the shared stacktrace buffer, since the
> + * host cannot directly access hypervisor memory in protected
> + * mode.
> + */
> +static void pkvm_dump_backtrace(unsigned long hyp_offset)
> +{
> +        unsigned long *stacktrace_entry
> +                = (unsigned long *)this_cpu_ptr_nvhe_sym(pkvm_stacktrace);
> +        unsigned long va_mask, pc;
> +
> +        va_mask = GENMASK_ULL(vabits_actual - 1, 0);
> +
> +        kvm_err("Protected nVHE HYP call trace:\n");

This and the footer printks should be put in respective helpers shared
between the pKVM and non-pKVM backtrace implementations. Users will
invariably bake some pattern matching to scrape these traces, so the
format should be consistent between both flavors.

> +        /* The stack trace is terminated by a null entry */
> +        for (; *stacktrace_entry; stacktrace_entry++) {

At the point we're dumping the backtrace we know that EL2 has already
soiled itself, so we shouldn't depend on it providing NULL terminators.
I believe this loop should have an explicit range check in addition to
the NULL check.

--
Thanks,
Oliver
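
For the shared header/footer comment above, here is a minimal sketch of
what the helpers could look like. The names
kvm_nvhe_dump_backtrace_start()/kvm_nvhe_dump_backtrace_end() and the
exact strings are hypothetical, not taken from the patch:

static void kvm_nvhe_dump_backtrace_start(void)
{
        /* One shared header so scraped output is identical for both flavors */
        kvm_err("nVHE call trace:\n");
}

static void kvm_nvhe_dump_backtrace_end(void)
{
        /* Matching footer, also shared by both backtrace implementations */
        kvm_err("---[ end nVHE call trace ]---\n");
}

Both pkvm_dump_backtrace() and the non-protected dumper would then call
these instead of open-coding the strings.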
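
For the bounded walk, a rough sketch is below. NVHE_STACKTRACE_SIZE is
an assumed name standing in for whatever actually bounds the shared
pkvm_stacktrace buffer, and the loop body only mirrors the
masking/offset translation implied by the declarations in the quoted
hunk; it is not taken from the patch:

        unsigned int i;

        /*
         * Bound the walk by the size of the shared buffer rather than
         * trusting EL2 to have written a NULL terminator, but still
         * stop early if one is present.
         */
        for (i = 0; i < NVHE_STACKTRACE_SIZE / sizeof(long); i++) {
                if (!stacktrace_entry[i])
                        break;

                pc = (stacktrace_entry[i] & va_mask) + hyp_offset;
                kvm_err(" [<%016lx>] %pB\n", pc, (void *)pc);
        }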