Re: [PATCH v2 bpf-next 10/13] bpf: Add instructions for atomic[64]_[fetch_]sub

On 11/28/20 5:34 PM, Alexei Starovoitov wrote:
> On Fri, Nov 27, 2020 at 09:35:07PM -0800, Yonghong Song wrote:
>>
>>
>> On 11/27/20 9:57 AM, Brendan Jackman wrote:
>>> Including only interpreter and x86 JIT support.
>>>
>>> x86 doesn't provide an atomic exchange-and-subtract instruction that
>>> could be used for BPF_SUB | BPF_FETCH, however we can just emit a NEG
>>> followed by an XADD to get the same effect.
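
[ Editorial illustration, not part of the patch: the rewrite relies on
  the two's-complement identity fetch_sub(p, x) == fetch_add(p, -x).
  A minimal C sketch of that equivalence using the standard GCC/Clang
  __atomic builtins; the function names are made up for illustration. ]

  #include <stdint.h>

  static uint64_t fetch_sub_direct(uint64_t *p, uint64_t x)
  {
          return __atomic_fetch_sub(p, x, __ATOMIC_SEQ_CST);
  }

  static uint64_t fetch_sub_via_neg_add(uint64_t *p, uint64_t x)
  {
          /* -x wraps modulo 2^64, so adding it subtracts x.  The old
           * value returned is the same, which is exactly what NEG
           * followed by LOCK XADD yields on x86.
           */
          return __atomic_fetch_add(p, -x, __ATOMIC_SEQ_CST);
  }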

>>> Signed-off-by: Brendan Jackman <jackmanb@xxxxxxxxxx>
>>> ---
>>>  arch/x86/net/bpf_jit_comp.c  | 16 ++++++++++++++--
>>>  include/linux/filter.h       | 20 ++++++++++++++++++++
>>>  kernel/bpf/core.c            |  1 +
>>>  kernel/bpf/disasm.c          | 16 ++++++++++++----
>>>  kernel/bpf/verifier.c        |  2 ++
>>>  tools/include/linux/filter.h | 20 ++++++++++++++++++++
>>>  6 files changed, 69 insertions(+), 6 deletions(-)
>>>
>>> diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
>>> index 7431b2937157..a8a9fab13fcf 100644
>>> --- a/arch/x86/net/bpf_jit_comp.c
>>> +++ b/arch/x86/net/bpf_jit_comp.c
>>> @@ -823,6 +823,7 @@ static int emit_atomic(u8 **pprog, u8 atomic_op,
>>>  	/* emit opcode */
>>>  	switch (atomic_op) {
>>> +	case BPF_SUB:
>>>  	case BPF_ADD:
>>>  		/* lock *(u32/u64*)(dst_reg + off) <op>= src_reg */
>>>  		EMIT1(simple_alu_opcodes[atomic_op]);
>>> @@ -1306,8 +1307,19 @@ st:			if (is_imm8(insn->off))
>>>  		case BPF_STX | BPF_ATOMIC | BPF_W:
>>>  		case BPF_STX | BPF_ATOMIC | BPF_DW:
>>> -			err = emit_atomic(&prog, insn->imm, dst_reg, src_reg,
>>> -					  insn->off, BPF_SIZE(insn->code));
>>> +			if (insn->imm == (BPF_SUB | BPF_FETCH)) {
>>> +				/*
>>> +				 * x86 doesn't have an XSUB insn, so we negate
>>> +				 * and XADD instead.
>>> +				 */
>>> +				emit_neg(&prog, src_reg, BPF_SIZE(insn->code) == BPF_DW);
>>> +				err = emit_atomic(&prog, BPF_ADD | BPF_FETCH,
>>> +						  dst_reg, src_reg, insn->off,
>>> +						  BPF_SIZE(insn->code));
>>> +			} else {
>>> +				err = emit_atomic(&prog, insn->imm, dst_reg, src_reg,
>>> +						  insn->off, BPF_SIZE(insn->code));
>>> +			}
>>>  			if (err)
>>>  				return err;
>>>  			break;
>>> diff --git a/include/linux/filter.h b/include/linux/filter.h
>>> index 6186280715ed..a20a3a536bf5 100644
>>> --- a/include/linux/filter.h
>>> +++ b/include/linux/filter.h
>>> @@ -280,6 +280,26 @@ static inline bool insn_is_zext(const struct bpf_insn *insn)
>>>  		.off   = OFF,					\
>>>  		.imm   = BPF_ADD | BPF_FETCH })
>>> +/* Atomic memory sub, *(uint *)(dst_reg + off16) -= src_reg */
>>> +
>>> +#define BPF_ATOMIC_SUB(SIZE, DST, SRC, OFF)			\
>>> +	((struct bpf_insn) {					\
>>> +		.code  = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC,	\
>>> +		.dst_reg = DST,					\
>>> +		.src_reg = SRC,					\
>>> +		.off   = OFF,					\
>>> +		.imm   = BPF_SUB })
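
[ Usage sketch, not from the patch: assembling the new instruction the
  way selftests typically build raw programs.  The register choices and
  surrounding insns are made up for illustration; the other macros are
  the existing ones from tools/include/linux/filter.h. ]

  struct bpf_insn prog[] = {
          /* r0 = 3; *(u64 *)(r10 - 8) = r0 */
          BPF_MOV64_IMM(BPF_REG_0, 3),
          BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -8),
          /* r1 = 1; lock *(u64 *)(r10 - 8) -= r1 */
          BPF_MOV64_IMM(BPF_REG_1, 1),
          BPF_ATOMIC_SUB(BPF_DW, BPF_REG_10, BPF_REG_1, -8),
          BPF_EXIT_INSN(),
  };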

>> Currently, llvm does not support XSUB; should we support it in llvm?
>> At the source level, as implemented in the JIT, the user can just do
>> a negate followed by an xadd.

> I forgot we have BPF_NEG insn :)
> Indeed it's probably easier to handle the atomic_fetch_sub() builtin
> completely on the llvm side. It can generate bpf_neg followed by
> atomic_fetch_add.

Just tried it. llvm selectiondag won't be able to automatically
convert atomic_fetch_sub to neg + atomic_fetch_add, so BPFInstrInfo.td
will need a pattern to match the atomic_fetch_sub IR. I will
experiment with this together with xsub.

> No need to burden verifier, interpreter and JITs with it.
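
[ Sketch of the source-level workaround discussed above, illustrative
  only: express an atomic fetch-and-subtract as a fetch-and-add of the
  negated operand.  The function and variable names are made up, and
  whether the builtin lowers to BPF_ADD | BPF_FETCH or to plain XADD
  depends on the llvm-side support discussed in this thread. ]

  #include <linux/types.h>

  static __u64 fetch_and_sub(__u64 *counter, __u64 delta)
  {
          /* -delta wraps modulo 2^64, so adding it subtracts delta. */
          return __sync_fetch_and_add(counter, -delta);
  }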



