On Tue, 23 Nov 2021, Peter Zijlstra wrote:

> On Tue, Nov 23, 2021 at 10:58:57AM +0100, Miroslav Benes wrote:
> > [ adding more CCs ]
> > 
> > On Mon, 22 Nov 2021, joao@xxxxxxxxxxxxxxxxxx wrote:
> > 
> > > Hi Miroslav, Petr and Nicolai,
> > > 
> > > Long time no talk, I hope you are all still doing great :)
> > 
> > Everything great here :)
> > 
> > > So, we have been cooking a few patches for enabling Intel CET/IBT
> > > support in the kernel. The way IBT works is -- whenever you have an
> > > indirect branch, the control flow must land on an endbr instruction.
> > > The idea is to constrain control flow in a way that makes it harder
> > > for attackers to achieve meaningful computation through pointer/memory
> > > corruption (as in, an attacker who can corrupt a function pointer by
> > > exploiting a memory corruption bug won't be able to execute an
> > > arbitrary piece of code, being restricted to jumping to endbr
> > > instructions). To make the allowed control-flow graph more
> > > restrictive, we are looking into how to minimize the number of endbrs
> > > in the final kernel binary -- meaning that if a function is never
> > > called indirectly, it shouldn't have an endbr instruction, thus
> > > increasing the security guarantees of the hardware feature.
> > > 
> > > Some ref about what is going on --
> > > https://lore.kernel.org/lkml/20211122170805.149482391@xxxxxxxxxxxxx/T/
> > 
> > Yes, I noticed something was happening again. There was a thread on this
> > in February https://lore.kernel.org/all/20210207104022.GA32127@xxxxxxx/
> > and some concerns were raised back then around fentry and int3 patching,
> > if I remember correctly. Is this still an issue?
> 
> The problem was bpf, and probably still is. I've not come around to
> looking at it. Assuming fentry is at +0 is silly (although I would like
> to see that restored for other reasons), but bpf will also need to emit
> ENDBR at least at the start of every JIT'ed program, because entry into
> them is through an indirect branch.
> 
> If nobody beats me to it, I'll get around to it eventually.

Ok. And we would need something like the following for livepatch (not
even compile tested).

---
diff --git a/arch/powerpc/include/asm/livepatch.h b/arch/powerpc/include/asm/livepatch.h
index 4fe018cc207b..7b9dcd51af32 100644
--- a/arch/powerpc/include/asm/livepatch.h
+++ b/arch/powerpc/include/asm/livepatch.h
@@ -19,16 +19,6 @@ static inline void klp_arch_set_pc(struct ftrace_regs *fregs, unsigned long ip)
 	regs_set_return_ip(regs, ip);
 }
 
-#define klp_get_ftrace_location klp_get_ftrace_location
-static inline unsigned long klp_get_ftrace_location(unsigned long faddr)
-{
-	/*
-	 * Live patch works only with -mprofile-kernel on PPC. In this case,
-	 * the ftrace location is always within the first 16 bytes.
-	 */
-	return ftrace_location_range(faddr, faddr + 16);
-}
-
 static inline void klp_init_thread_info(struct task_struct *p)
 {
 	/* + 1 to account for STACK_END_MAGIC */
diff --git a/kernel/livepatch/patch.c b/kernel/livepatch/patch.c
index fe316c021d73..81cd9235e160 100644
--- a/kernel/livepatch/patch.c
+++ b/kernel/livepatch/patch.c
@@ -127,15 +127,18 @@ static void notrace klp_ftrace_handler(unsigned long ip,
 
 /*
  * Convert a function address into the appropriate ftrace location.
  *
- * Usually this is just the address of the function, but on some architectures
- * it's more complicated so allow them to provide a custom behaviour.
+ * Usually this is just the address of the function, but there are some
+ * exceptions.
+ *
+ * * PPC - live patch works only with -mprofile-kernel. In this case,
+ *   the ftrace location is always within the first 16 bytes.
+ * * x86_64 with CET/IBT enabled - there is ENDBR instruction at +0 offset.
+ *   __fentry__ follows it.
  */
-#ifndef klp_get_ftrace_location
-static unsigned long klp_get_ftrace_location(unsigned long faddr)
+static inline unsigned long klp_get_ftrace_location(unsigned long faddr)
 {
-	return faddr;
+	return ftrace_location_range(faddr, faddr + 16);
 }
-#endif
 
 static void klp_unpatch_func(struct klp_func *func)
 {