On Wed 2021-12-29 13:56:47, David Vernet wrote:
> When initializing a 'struct klp_object' in klp_init_object_loaded(), and
> performing relocations in klp_resolve_symbols(), klp_find_object_symbol()
> is invoked to look up the address of a symbol in an already-loaded module
> (or vmlinux). This, in turn, calls kallsyms_on_each_symbol() or
> module_kallsyms_on_each_symbol() to find the address of the symbol that is
> being patched.
>
> It turns out that symbol lookups often take up the most CPU time when
> enabling and disabling a patch, and may hog the CPU and cause other tasks
> on that CPU's runqueue to starve -- even in paths where interrupts are
> enabled. For example, under certain workloads, enabling a KLP patch with
> many objects or functions may cause ksoftirqd to be starved, and thus for
> interrupts to be backlogged and delayed. This may end up causing TCP
> retransmits on the host where the KLP patch is being applied, and in
> general, may cause any softirqs serviced by ksoftirqd to be delayed while
> the patch is being applied.
>
> So as to ensure that kallsyms_on_each_symbol() does not end up hogging the
> CPU, this patch adds a call to cond_resched() in kallsyms_on_each_symbol()
> and module_kallsyms_on_each_symbol(), which are invoked when doing a symbol
> lookup in vmlinux and a module respectively. Without this patch, if a
> live-patch is applied on a 36-core Intel host with heavy TCP traffic, a
> ~10x spike is observed in TCP retransmits while the patch is being applied.
> Additionally, collecting sched events with perf indicates that ksoftirqd is
> awakened ~1.3 seconds before it's eventually scheduled. With the patch, no
> increase in TCP retransmit events is observed, and ksoftirqd is scheduled
> shortly after it's awakened.
>
> Signed-off-by: David Vernet <void@xxxxxxxxxxxxx>

OK, there was no strong pushback. I have committed the patch into
livepatch.git, branch for-5.17/kallsyms.

Best Regards,
Petr
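
For readers following the thread, the change boils down to a cond_resched()
at the bottom of each symbol-walk loop. Below is a rough sketch of the idea
against kallsyms_on_each_symbol(); the surrounding loop body is paraphrased
from the kernel sources of that era rather than copied from the committed
hunk, so treat everything except the added cond_resched() as illustrative:

	/* kernel/kallsyms.c -- sketch, not the exact committed hunk */
	int kallsyms_on_each_symbol(int (*fn)(void *, const char *,
					      struct module *, unsigned long),
				    void *data)
	{
		char namebuf[KSYM_NAME_LEN];
		unsigned long i;
		unsigned int off;
		int ret;

		/* Walk every symbol in the vmlinux kallsyms table. */
		for (i = 0, off = 0; i < kallsyms_num_syms; i++) {
			off = kallsyms_expand_symbol(off, namebuf,
						     ARRAY_SIZE(namebuf));
			ret = fn(data, namebuf, NULL, kallsyms_sym_address(i));
			if (ret != 0)
				return ret;
			/* New: yield the CPU between symbols so other tasks
			 * on this runqueue (e.g. ksoftirqd) are not starved
			 * during a long walk. */
			cond_resched();
		}
		return 0;
	}

module_kallsyms_on_each_symbol() gets the analogous call inside its own walk.
Both functions run in process context (the module walk is done under a
sleepable lock), so it should be safe to reschedule there.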