[PATCH bpf-next 0/1] arm64: Add BPF exception tables

The following patch adds support for BPF_PROBE_MEM on arm64. The
implementation is simple, but I wanted to give a bit of background
first. If you're familiar with recent BPF development, you can skip to
the patch (or fact-check the following blurb).

BPF programs used for tracing can inspect any of the traced function's
arguments and follow pointers in struct members. Traditionally, the BPF
program would get a struct pt_regs as its argument and cast the
register values to the appropriate struct pointers. The BPF verifier
would mandate that every memory access go through the bpf_probe_read()
helper, which suppresses page faults (see samples/bpf/tracex1_kern.c).
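
As a rough sketch of that style, loosely modelled on
samples/bpf/tracex1_kern.c (the program name, traced function and
fields are only illustrative, and the includes assume the usual libbpf
helper headers plus a __TARGET_ARCH_* define for PT_REGS_PARM1()):

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

SEC("kprobe/__netif_receive_skb_core")
int old_style(struct pt_regs *ctx)
{
	struct net_device *dev = NULL;
	struct sk_buff *skb;
	int ifindex = 0;

	/* Cast the first argument register to the expected type... */
	skb = (struct sk_buff *)PT_REGS_PARM1(ctx);

	/* ...but every dereference goes through the fault-safe helper */
	bpf_probe_read(&dev, sizeof(dev), &skb->dev);
	if (dev)
		bpf_probe_read(&ifindex, sizeof(ifindex), &dev->ifindex);

	bpf_printk("ifindex=%d", ifindex);
	return 0;
}

char LICENSE[] SEC("license") = "GPL";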

With the BPF Type Format (BTF) embedded into the kernel
(CONFIG_DEBUG_INFO_BTF), the verifier can now check the type of any
access performed by a BPF program. It rejects, for example, programs
that cast to a different structure and perform out-of-bounds accesses,
programs that attempt to dereference something that isn't a pointer,
or programs that dereference a pointer that hasn't gone through a NULL
check.
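
For contrast with the sketch above, roughly the same access written as
a BTF-typed fentry program (again only an illustrative sketch with
made-up names); as the next paragraph notes, the verifier now accepts
such direct dereferences:

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

SEC("fentry/__netif_receive_skb_core")
int BPF_PROG(btf_style, struct sk_buff *skb)
{
	int ifindex = 0;

	/*
	 * Direct dereferences, no bpf_probe_read(): the verifier knows
	 * from BTF that skb and skb->dev are struct pointers.
	 */
	if (skb->dev)
		ifindex = skb->dev->ifindex;

	bpf_printk("ifindex=%d", ifindex);
	return 0;
}

char LICENSE[] SEC("license") = "GPL";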

As this makes tracing programs safer, the verifier now allows loading
programs that access struct members directly, without bpf_probe_read().
It is, however, still possible to trigger page faults. In the following
program, with which I've tested this patch, the verifier does not
mandate a NULL check for the second-level pointer:

/*
 * From tools/testing/selftests/bpf/progs/bpf_iter_task.c
 * dump_task() is called for each task.
 */
SEC("iter/task")
int dump_task(struct bpf_iter__task *ctx)
{
	struct seq_file *seq = ctx->meta->seq;
	struct task_struct *task = ctx->task;

	/* Program would be rejected without this check */
	if (task == NULL)
		return 0;

	/*
	 * However the verifier does not currently mandate
	 * checking task->mm, and the following faults for kernel
	 * threads.
	 */
	BPF_SEQ_PRINTF(seq, "pid=%d vm=%d", task->pid, task->mm->total_vm);
	return 0;
}

Even if it checked this case, the verifier couldn't guarantee that all
accesses are safe, since kernel structures could in theory contain
garbage or error pointers. So to allow fast access without
bpf_probe_read(), a JIT implementation must support BPF exception
tables: for each access to a BTF pointer, the JIT adds an entry to an
exception table appended to the BPF program. If the access faults at
runtime, the exception handler zeroes the destination register and
skips the faulting instruction, so the example above displays vm=0 for
kernel threads.
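
To make that concrete, here is a conceptual sketch in plain C (not the
actual kernel code; the entry layout, encoding and names are invented
for illustration) of what the fixup does when a guarded load faults:

#include <stdbool.h>
#include <stdint.h>

/* One entry per guarded load, appended after the JITed program. */
struct sketch_extable_entry {
	int insn_offset;	/* offset of the load that may fault */
	int fixup;		/* where to resume + destination register */
};

/*
 * Invoked from the fault handler once the faulting PC has been matched
 * to an entry (lookup elided here): instead of reporting a page fault,
 * make the load read as zero and continue after it.
 */
bool sketch_bpf_fixup(const struct sketch_extable_entry *ex,
		      uint64_t regs[32], uint64_t *pc)
{
	int dst_reg = ex->fixup & 0x1f;	/* illustrative encoding */

	regs[dst_reg] = 0;	/* the faulting load appears to return 0 */
	*pc += 4;		/* skip the 4-byte AArch64 load instruction */
	return true;		/* fault handled, execution continues */
}

In the actual patch the entries sit next to the JITed program and the
fixup hooks into the existing arm64 exception table handling (hence the
arch/arm64/mm/extable.c change in the diffstat below).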

See also
* The original implementation on x86
  https://lore.kernel.org/bpf/20191016032505.2089704-1-ast@xxxxxxxxxx/
* The s390 implementation
  https://lore.kernel.org/bpf/20200715233301.933201-1-iii@xxxxxxxxxxxxx/

Jean-Philippe Brucker (1):
  arm64: bpf: Add BPF exception tables

 arch/arm64/include/asm/extable.h |  3 ++
 arch/arm64/mm/extable.c          | 11 ++--
 arch/arm64/net/bpf_jit_comp.c    | 93 +++++++++++++++++++++++++++++---
 3 files changed, 98 insertions(+), 9 deletions(-)

-- 
2.27.0



