Re: [PATCH bpf-next v2 04/15] bpf: Support new unconditional bswap instruction

On 7/16/23 6:42 PM, Eduard Zingerman wrote:
On Wed, 2023-07-12 at 23:07 -0700, Yonghong Song wrote:
The existing 'be' and 'le' insns do a conditional bswap
depending on host endianness. This patch implements
unconditional bswap insns.

Signed-off-by: Yonghong Song <yhs@xxxxxx>
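
To illustrate the difference (a rough sketch in llvm asm syntax, not
part of the patch):

  r1 = be64 r1      /* BPF_ALU | BPF_END | BPF_FROM_BE, imm=64: nop on big-endian hosts */
  r1 = bswap64 r1   /* BPF_ALU64 | BPF_END | BPF_TO_LE, imm=64: always swaps the bytes */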

Not sure if this is important, but here is an observation:
function is_reg64() has the following code:

  ...
  if (class == BPF_ALU64 || class == BPF_JMP ||
      /* BPF_END always use BPF_ALU class. */
      (class == BPF_ALU && op == BPF_END && insn->imm == 64))
          return true;
  ...

It probably has to be updated, but I'm not sure how:
- either check insn->imm == 64 for ALU64 as well;
- or just update the comment, given that the instruction always sets all 64 bits.

I think we can remove insn->imm == 64 from the condition,
but I need to double check.
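
Just to spell out what I mean, pending that double check (untested sketch):

  if (class == BPF_ALU64 || class == BPF_JMP ||
      (class == BPF_ALU && op == BPF_END))
          return true;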


---
  arch/x86/net/bpf_jit_comp.c |  1 +
  kernel/bpf/core.c           | 14 ++++++++++++++
  kernel/bpf/verifier.c       |  2 +-
  3 files changed, 16 insertions(+), 1 deletion(-)

diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index a740a1a6e71d..adda5e7626b4 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -1322,6 +1322,7 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image, u8 *rw_image
  			break;
  		case BPF_ALU | BPF_END | BPF_FROM_BE:
+		case BPF_ALU64 | BPF_END | BPF_FROM_LE:
  			switch (imm32) {
  			case 16:
  				/* Emit 'ror %ax, 8' to swap lower 2 bytes */
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index fe648a158c9e..86bb412fee39 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -1524,6 +1524,7 @@ EXPORT_SYMBOL_GPL(__bpf_call_base);
  	INSN_3(ALU64, DIV,  X),			\
  	INSN_3(ALU64, MOD,  X),			\
  	INSN_2(ALU64, NEG),			\
+	INSN_3(ALU64, END, TO_LE),		\
  	/*   Immediate based. */		\
  	INSN_3(ALU64, ADD,  K),			\
  	INSN_3(ALU64, SUB,  K),			\
@@ -1845,6 +1846,19 @@ static u64 ___bpf_prog_run(u64 *regs, const struct bpf_insn *insn)
  			break;
  		}
  		CONT;
+	ALU64_END_TO_LE:
+		switch (IMM) {
+		case 16:
+			DST = (__force u16) __swab16(DST);
+			break;
+		case 32:
+			DST = (__force u32) __swab32(DST);
+			break;
+		case 64:
+			DST = (__force u64) __swab64(DST);
+			break;
+		}
+		CONT;
  	/* CALL */
  	JMP_CALL:
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 5fee9f24cb5e..22ba0744547b 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -13036,7 +13036,7 @@ static int check_alu_op(struct bpf_verifier_env *env, struct bpf_insn *insn)
  		} else {
  			if (insn->src_reg != BPF_REG_0 || insn->off != 0 ||
  			    (insn->imm != 16 && insn->imm != 32 && insn->imm != 64) ||
-			    BPF_CLASS(insn->code) == BPF_ALU64) {
+			    (BPF_CLASS(insn->code) == BPF_ALU64 && BPF_SRC(insn->code) != BPF_K)) {
  				verbose(env, "BPF_END uses reserved fields\n");
  				return -EINVAL;
  			}
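
For reference, the new insn could be built with the existing uapi macros
roughly like below (illustration only, not part of the patch); on x86 the
JIT simply reuses the BPF_FROM_BE code path, since the host is little-endian
and that path already performs the real swap:

  /* unconditional byte swap of r1; imm selects the 16/32/64 bit width */
  BPF_RAW_INSN(BPF_ALU64 | BPF_END | BPF_TO_LE, BPF_REG_1, 0, 0, 64),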




