Re: [PATCH bpf-next v3 3/4] LoongArch: Add BPF JIT support

On 09/03/2022 04:32 PM, Huacai Chen wrote:
On Thu, Sep 1, 2022 at 10:27 AM Tiezhu Yang <yangtiezhu@xxxxxxxxxxx> wrote:

BPF programs are normally handled by a BPF interpreter. Add BPF JIT
support for LoongArch so that the kernel can generate native code
when a program is loaded into the kernel; this significantly
speeds up processing of BPF programs.

[...]

+
+static inline int emit_cond_jmp(struct jit_ctx *ctx, u8 cond, enum loongarch_gpr rj,
+                               enum loongarch_gpr rd, int jmp_offset)
+{
+       /*
+        * A large PC-relative jump offset may overflow the immediate field of
+        * the native conditional branch instruction, forcing a fallback to an
+        * absolute jump sequence, which is particularly nasty. For now, use
+        * cond_jmp_offs26() directly to keep things simple. We could add
+        * support for far branching in the future, but branch relaxation
+        * requires more than two passes to converge and the code seems too
+        * complex to be worth the extra pain. Anyway, leave it as it is for
+        * now to enhance code readability.
Oh, no. I don't think this is a very difficult problem, because the
old version has already solved it [1]. Please improve your code and
send V4.
BTW, I have committed V3 with some small modifications in
https://github.com/loongson/linux/commits/loongarch-next, please base
V4 on that.

[1] https://github.com/loongson/linux/commit/e20b2353f40cd13720996524e1df6d0ca086eeb8#diff-6d2f4f5a862a5dce12f8eb0feeca095825c4ed1c2e7151b0905fb8d03c98922e

--------code in the old version--------
static inline void emit_cond_jump(struct jit_ctx *ctx, u8 cond, enum loongarch_gpr rj,
                                  enum loongarch_gpr rd, int jmp_offset)
{
        if (is_signed_imm16(jmp_offset))
                cond_jump_offs16(ctx, cond, rj, rd, jmp_offset);
        else if (is_signed_imm26(jmp_offset))
                cond_jump_offs26(ctx, cond, rj, rd, jmp_offset);
        else
                cond_jump_offs32(ctx, cond, rj, rd, jmp_offset);
}

static inline void emit_uncond_jump(struct jit_ctx *ctx, int jmp_offset, bool is_exit)
{
        if (is_signed_imm26(jmp_offset))
                uncond_jump_offs26(ctx, jmp_offset);
        else
                uncond_jump_offs32(ctx, jmp_offset, is_exit);
}
--------end of code--------
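[Editor's note: the range checks in the old code above select the smallest branch
encoding that can reach a given offset. A minimal standalone sketch of that
selection logic in plain C (is_signed_imm() and pick_encoding() are illustrative
helpers, not the kernel's actual implementation):

```c
#include <assert.h>
#include <stdbool.h>

/* Check whether val fits in a sign-extended immediate of 'bits' bits. */
static bool is_signed_imm(long val, unsigned int bits)
{
	long min = -(1L << (bits - 1));
	long max = (1L << (bits - 1)) - 1;

	return val >= min && val <= max;
}

/*
 * Pick the narrowest encoding that can reach jmp_offset, mirroring the
 * 16 -> 26 -> 32 fallback chain in emit_cond_jump() above. Returns the
 * immediate width of the chosen encoding.
 */
static int pick_encoding(long jmp_offset)
{
	if (is_signed_imm(jmp_offset, 16))
		return 16;	/* short conditional branch */
	if (is_signed_imm(jmp_offset, 26))
		return 26;	/* long PC-relative jump */
	return 32;		/* absolute jump sequence */
}
```

A nearby offset thus selects the 16-bit form, while a distant one falls through
to the wider encodings, which is exactly why the staggered jump tests can see
different JITed code for the same instruction.]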

Huacai


Hi Huacai,

This change is needed to pass the special test cases:
"a new type of jump test where the program jumps forwards
and backwards with increasing offset. It mainly tests JITs where a
relative jump may generate different JITed code depending on the offset
size."

They were introduced in commit a7d2e752e520 ("bpf/tests: Add staggered
JMP and JMP32 tests"), which landed after the old internal version you
mentioned.

Here, we use the native instructions to jump with a 26-bit range directly
rather than trying the 16-bit range first, and we do not handle offsets
beyond 26 bits because there is no native instruction for that case and
it would need to be emulated.

As the code comment says, this is to enhance code readability for now.
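[Editor's note: for context on the ranges involved, a back-of-the-envelope
sketch, assuming the usual LoongArch encoding where branch immediates count
4-byte instructions (i.e. the offset is shifted left by 2): a signed 16-bit
immediate reaches about +/-128 KB, and a signed 26-bit immediate about
+/-128 MB, comfortably larger than any JITed BPF program:

```c
#include <assert.h>

/*
 * Byte range reachable in one direction by a signed N-bit branch
 * immediate, given that the immediate counts 4-byte instructions.
 */
static long branch_range_bytes(unsigned int bits)
{
	return (1L << (bits - 1)) * 4;
}
```

This is why emitting the 26-bit form directly is a safe simplification.]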

Thanks,
Tiezhu
