On Wed, Oct 16, 2019 at 06:58:42PM +0100, Mark Rutland wrote:
> I've just done the core (non-arm64) bits today, and pushed that out:
>
>   https://git.kernel.org/pub/scm/linux/kernel/git/mark/linux.git/log/?h=arm64/ftrace-with-regs
>
> ... I'll fold the remaining bits of patches 4 and 5 together tomorrow
> atop of that.

I've just force-pushed an updated version with the actual arm64
FTRACE_WITH_REGS bits. There are a couple of things I still need to
verify, but I'm hoping that I can send this out for real next week.

In the process of reworking this I spotted some issues that will get in
the way of livepatching. Notably:

* When modules can be loaded far away from the kernel, we'll potentially
  need a PLT for each function within a module, if each can be patched
  to a unique function. Currently we have a fixed number of PLTs, which
  is only sufficient for the two ftrace entry trampolines.

  IIUC, the new code being patched in is itself a module, in which case
  we'd need a PLT for each function in the main kernel image.

  We have a few options here, e.g. changing which memory size model we
  use, or reserving space for a PLT before each function using
  -fpatchable-function-entry=N,M.

* There are windows where backtracing will miss the callsite's caller,
  as its address is not live in the LR or in the existing chain of frame
  records. Thus we cannot claim to have a reliable stacktrace. I suspect
  we'll have to teach the stacktrace code to handle this as a special
  case.

I'll try to write these up, as similar issues probably apply to other
architectures with a link register.

Thanks,
Mark.