On 1/19/22 3:49 PM, Hou Tao wrote:
Since commit b2eed9b58811 ("arm64/kernel: kaslr: reduce module
randomization range to 2 GB"), modules on arm64 are placed within
2 GB of the kernel region whether KASLR is enabled or not, so the
s32 field in bpf_kfunc_desc is sufficient to represent the offset of
a module function relative to __bpf_call_base. The only thing needed
is to override bpf_jit_supports_kfunc_call().
Signed-off-by: Hou Tao <houtao1@xxxxxxxxxx>
LGTM, could we also add a BPF selftest to assert that this assumption
won't break in the future when bpf_jit_supports_kfunc_call() returns true?
E.g. extending lib/test_bpf.ko could be an option, wdyt?
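Something minimal along these lines might already be enough (untested
sketch only, the function names below are made up and this is not
existing lib/test_bpf.c code):

#include <linux/filter.h>	/* __bpf_call_base */
#include <linux/module.h>

/* Dummy module symbol standing in for a module kfunc. */
static noinline u64 test_kfunc_offset_dummy(void)
{
	return 0;
}

static int __init test_kfunc_offset_init(void)
{
	/* Offset a module kfunc would get in bpf_kfunc_desc::imm. */
	s64 off = (s64)((unsigned long)&test_kfunc_offset_dummy -
			(unsigned long)&__bpf_call_base);

	if (off != (s64)(s32)off) {
		pr_err("module kfunc offset %lld does not fit in s32\n", off);
		return -ERANGE;
	}
	return 0;
}
module_init(test_kfunc_offset_init);

MODULE_LICENSE("GPL");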
---
arch/arm64/net/bpf_jit_comp.c | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
index e96d4d87291f..74f9a9b6a053 100644
--- a/arch/arm64/net/bpf_jit_comp.c
+++ b/arch/arm64/net/bpf_jit_comp.c
@@ -1143,6 +1143,11 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
return prog;
}
+bool bpf_jit_supports_kfunc_call(void)
+{
+ return true;
+}
+
u64 bpf_jit_alloc_exec_limit(void)
{
return VMALLOC_END - VMALLOC_START;
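For anyone reading along: with the flag flipped, no other arm64 JIT change
should be needed, because the call target for a kfunc is reconstructed
generically from the s32 immediate, roughly like this (paraphrased from
bpf_jit_get_func_addr() in kernel/bpf/core.c, the in-tree code differs in
detail):

	/* insn->imm was set by the verifier to target - __bpf_call_base,
	 * so this only works if the target lies within s32 range of the
	 * kernel, which the 2 GB module region guarantees on arm64.
	 */
	addr = (u8 *)__bpf_call_base + insn->imm;
	*func_addr = (unsigned long)addr;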