On 24/02/2025 23:23, Andrii Nakryiko wrote:
On Fri, Feb 21, 2025 at 2:14 PM Mykyta Yatsenko
<mykyta.yatsenko5@xxxxxxxxx> wrote:
From: Mykyta Yatsenko <yatsenko@xxxxxxxx>
Introduce the bpf_dynptr_copy kfunc, allowing data to be copied from one
dynptr to another. This functionality is useful in scenarios such as
capturing XDP data to a ring buffer.
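For illustration, a minimal BPF-side usage sketch of that scenario; the map
and program names are made up, and the kfunc declarations are assumptions
written out by hand rather than copied from a header:

/* Hypothetical example: capture the first bytes of an XDP frame into a
 * ring buffer through dynptrs. Works for both linear and fragmented frames
 * because bpf_dynptr_copy() falls back to chunked read/write.
 */
#include <vmlinux.h>
#include <bpf/bpf_helpers.h>

extern int bpf_dynptr_from_xdp(struct xdp_md *xdp, __u64 flags,
			       struct bpf_dynptr *ptr) __ksym;
extern int bpf_dynptr_copy(struct bpf_dynptr *dst_ptr, __u32 dst_off,
			   struct bpf_dynptr *src_ptr, __u32 src_off,
			   __u32 size) __ksym;

struct {
	__uint(type, BPF_MAP_TYPE_RINGBUF);
	__uint(max_entries, 4096);
} pkt_rb SEC(".maps");

#define CAPTURE_SZ 64

SEC("xdp")
int xdp_capture(struct xdp_md *ctx)
{
	struct bpf_dynptr src, dst;

	if (bpf_dynptr_from_xdp(ctx, 0, &src))
		return XDP_PASS;

	/* reserve must always be paired with submit/discard */
	if (bpf_ringbuf_reserve_dynptr(&pkt_rb, CAPTURE_SZ, 0, &dst) ||
	    bpf_dynptr_copy(&dst, 0, &src, 0, CAPTURE_SZ)) {
		bpf_ringbuf_discard_dynptr(&dst, 0);
		return XDP_PASS;
	}

	bpf_ringbuf_submit_dynptr(&dst, 0);
	return XDP_PASS;
}

char _license[] SEC("license") = "GPL";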
The implementation consists of 4 branches:
* A fast branch for contiguous buffer capacity in both source and
destination dynptrs
* 3 branches utilizing __bpf_dynptr_read and __bpf_dynptr_write to copy
data to/from non-contiguous buffers
Signed-off-by: Mykyta Yatsenko <yatsenko@xxxxxxxx>
---
kernel/bpf/helpers.c | 55 ++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 55 insertions(+)
LGTM, a bit of unnecessary code I pointed out, but I like how minimal
and clean all this looks, and how it completely reuses pre-existing APIs.
Acked-by: Andrii Nakryiko <andrii@xxxxxxxxxx>
diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
index 6600aa4492ec..264afa0effb0 100644
--- a/kernel/bpf/helpers.c
+++ b/kernel/bpf/helpers.c
@@ -2770,6 +2770,60 @@ __bpf_kfunc int bpf_dynptr_clone(const struct bpf_dynptr *p,
return 0;
}
+/**
+ * bpf_dynptr_copy() - Copy data from one dynptr to another.
+ * @dst_ptr: Destination dynptr - where data should be copied to
+ * @dst_off: Offset into the destination dynptr
+ * @src_ptr: Source dynptr - where data should be copied from
+ * @src_off: Offset into the source dynptr
+ * @size: Length of the data to copy from source to destination
+ *
+ * Copies data from source dynptr to destination dynptr.
+ */
+__bpf_kfunc int bpf_dynptr_copy(struct bpf_dynptr *dst_ptr, u32 dst_off,
+ struct bpf_dynptr *src_ptr, u32 src_off, u32 size)
+{
+ struct bpf_dynptr_kern *dst = (struct bpf_dynptr_kern *)dst_ptr;
+ struct bpf_dynptr_kern *src = (struct bpf_dynptr_kern *)src_ptr;
+ void *src_slice, *dst_slice;
+ char buf[256];
+ u32 off;
+
+ src_slice = bpf_dynptr_slice(src_ptr, src_off, NULL, size);
+ dst_slice = bpf_dynptr_slice_rdwr(dst_ptr, dst_off, NULL, size);
+
+ if (src_slice && dst_slice) {
+ memmove(dst_slice, src_slice, size);
+ return 0;
+ }
+
+ if (src_slice)
+ return __bpf_dynptr_write(dst, dst_off, src_slice, size, 0);
+
+ if (dst_slice)
+ return __bpf_dynptr_read(dst_slice, size, src, src_off, 0);
+
+ if (bpf_dynptr_check_off_len(dst, dst_off, size) ||
+ bpf_dynptr_check_off_len(src, src_off, size))
__bpf_dynptr_read() and __bpf_dynptr_write() do these checks, so
either it's unnecessary and we should keep all the sanity checking to
dynptr_{read,write}, OR we ensure __bpf_dynptr_read/write don't do
sanity checking every single time and we do full checking here, but
then we'll need to also check !dst->data ||
__bpf_dynptr_is_rdonly(dst)
I think for now, I'd keep all the sanity checking to read/write and
not over-optimize. So let's drop these checks?
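For reference, the existing read path already starts with roughly this
validation (a loose paraphrase of __bpf_dynptr_read() in helpers.c, not a
verbatim quote; __bpf_dynptr_write() is structured the same way):

int __bpf_dynptr_read(void *dst, u32 len,
		      const struct bpf_dynptr_kern *src,
		      u32 offset, u64 flags)
{
	int err;

	/* reject NULL/invalid dynptrs and unsupported flags */
	if (!src->data || flags)
		return -EINVAL;

	/* the same off/len sanity check as above, done on every call */
	err = bpf_dynptr_check_off_len(src, offset, len);
	if (err)
		return err;

	/* ... type-specific copy (local/ringbuf/skb/xdp) ... */
}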
I added this check to make the chunk-by-chunk copy more transactional/atomic,
i.e. to make a partially completed copy less likely. The other checks, I
assume, would fail or pass equally for the first and the last chunk. Of
course, we don't give any atomicity guarantee here, but I thought a size
mismatch could be a common case worth handling up front.
pw-bot: cr
+ return -E2BIG;
+
+ off = 0;
+ while (off < size) {
+ u32 chunk_sz = min_t(u32, sizeof(buf), size - off);
+ int err = 0;
nit: unnecessary = 0 initialization, you are overwriting it immediately below
+
+ err = __bpf_dynptr_read(buf, chunk_sz, src, src_off + off, 0);
+ if (err)
+ return err;
+ err = __bpf_dynptr_write(dst, dst_off + off, buf, chunk_sz, 0);
+ if (err)
+ return err;
+
+ off += chunk_sz;
+ }
+ return 0;
+}
+
__bpf_kfunc void *bpf_cast_to_kern_ctx(void *obj)
{
return obj;
@@ -3218,6 +3272,7 @@ BTF_ID_FLAGS(func, bpf_dynptr_is_null)
BTF_ID_FLAGS(func, bpf_dynptr_is_rdonly)
BTF_ID_FLAGS(func, bpf_dynptr_size)
BTF_ID_FLAGS(func, bpf_dynptr_clone)
+BTF_ID_FLAGS(func, bpf_dynptr_copy)
#ifdef CONFIG_NET
BTF_ID_FLAGS(func, bpf_modify_return_test_tp)
#endif
--
2.48.1