On Thu 2016-02-25 23:11:45, Balbir Singh wrote:
> This applies on top of the patches posted by Michael today
> Enable livepatching. This takes patch 6/8 and 7/8 of v8 as the base.
> Removes the extra strict check in gcc-profile-kernel-notrace.sh
> and adds logic for checking offsets in livepatch. The patch
> for HAVE_C_RECORDMCOUNT is not required and not used here.
>
> Depending on whether or not a TOC is generated, the offset
> for _mcount can be +16 or +8. The changes are such that the
> offset checks are specific to powerpc.
>
> Comments? Testing? I tested the sample in the livepatch
> directory
>
> --- /dev/null
> +++ b/arch/powerpc/include/asm/livepatch.h
> +#define ARCH_HAVE_KLP_MATCHADDR
> +static inline int klp_matchaddr(struct ftrace_ops *ops, unsigned long ip,
> +				int remove, int reset)
> +{
> +	int offsets[] = {8, 16};
> +	int i;
> +	int ret = 1;
> +
> +	for (i = 0; i < ARRAY_SIZE(offsets); i++) {
> +		ret = ftrace_set_filter_ip(ops, ip+offsets[i], remove, reset);

The search for the right address might be replaced by an ftrace_location()
call. But instead of blindly trying to hit the right address, I would
suggest implementing a function that searches for the ftrace location
of a given function address. I mean something like:

/**
 * function_to_ftrace_location - get ftrace location for the given
 *	function address
 * @addr: function address
 *
 * Returns the address of the ftrace location for the given function.
 * Returns 0 if the address does not correspond to any function
 * or if the function cannot be traced.
 */
unsigned long function_to_ftrace_location(unsigned long addr)
{
	const struct dyn_ftrace *rec;
	const struct ftrace_page *pg;
	unsigned long symbol_size, offset, post_addr;
	unsigned long ret = 0UL;

	if (!kallsyms_lookup_size_offset(addr, &symbol_size, &offset))
		return 0UL;

	/* Start of the function and the first address past its end. */
	addr -= offset;
	post_addr = addr + symbol_size;

	mutex_lock(&ftrace_lock);

	/* Look for an ftrace record that falls inside the function. */
	do_for_each_ftrace_rec(pg, rec) {
		if (rec->ip >= addr && rec->ip < post_addr) {
			ret = rec->ip;
			goto end;
		}
	} while_for_each_ftrace_rec();

end:
	mutex_unlock(&ftrace_lock);
	return ret;
}

The function is just compile-tested on x86_64. We had a similar function
in the SUSE-specific LivePatch implementation before we realized that only
fentry (zero offset) could be reasonably supported on x86_64.

I will prepare a proper patch/patches for PPC early next week. I attended
a training this week and am a bit snowed under with mail.

Best Regards,
Petr
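
As a rough illustration only (not a tested patch), the powerpc
klp_matchaddr() from the quoted hunk could then drop the offset probing
and use such a helper directly, assuming function_to_ftrace_location()
is made visible to the arch code (e.g. via a declaration in
linux/ftrace.h):

#define ARCH_HAVE_KLP_MATCHADDR
static inline int klp_matchaddr(struct ftrace_ops *ops, unsigned long ip,
				int remove, int reset)
{
	unsigned long ftrace_ip;

	/* Resolve the real ftrace location instead of guessing +8/+16. */
	ftrace_ip = function_to_ftrace_location(ip);
	if (!ftrace_ip)
		return -EINVAL;

	return ftrace_set_filter_ip(ops, ftrace_ip, remove, reset);
}

This would keep the TOC/no-TOC distinction out of the livepatch code:
whatever offset the compiler used for the _mcount call, the record found
between the function start and its end is the one handed to
ftrace_set_filter_ip().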