On Thu, 1 Feb 2024 at 13:52, Puranjay Mohan <puranjay12@xxxxxxxxx> wrote:
>
> Changes in V2->V3:
> V2: https://lore.kernel.org/all/20230917000045.56377-1-puranjay12@xxxxxxxxx/
> - Use unwinder from stacktrace.c rather than open coding the unwind logic.
> - Fix a bug in the prologue related to BPF_FP (Xu Kuohai)
>
> Changes in V1->V2:
> V1: https://lore.kernel.org/all/20230912233942.6734-1-puranjay12@xxxxxxxxx/
> - Remove exceptions from DENYLIST.aarch64 as they are supported now.
>
> The base support for exceptions was merged with [1] and it was enabled for
> x86-64.
>
> This patch set enables the support on ARM64; all selftests are passing:
>
> # ./test_progs -a exceptions
> #74/1 exceptions/exception_throw_always_1:OK
> [...]

I think this looks ok; it would be nice if it received acks from arm64
experts.

If you have cycles to spare, please also look into
https://lore.kernel.org/bpf/20240201042109.1150490-1-memxor@xxxxxxxxx
and let me know how architecture-independent the cleanup code currently in
the x86 JIT should be made, so that we can do the same for arm64 later.
That would be required to complete the support for cleanups.

I guess we just need bpf_frame_spilled_caller_reg_off to be arch-specific
and lift the rest out into the BPF core. I will make that change in v2 in
any case.

Just a note: based on our off-list discussion about supporting this on
riscv as well (where a lot of registers would have to be saved on entry),
the hidden subprog trampoline could be a way to avoid that. The extra
callee-saved registers can be pushed by this subprog before entry into
bpf_throw, and since the BPF program does not touch them, they should
still hold what the kernel had originally upon entry into the BPF program
itself. The same could be done on arm64 and x86, but the returns would be
diminishing. Before doing this, it would be nice to quantify how much it
actually saves relative to the cost of pushing the extra registers.