On 24/12/05 08:41AM, Anton Protopopov wrote:
> On 24/12/04 10:08AM, Andrii Nakryiko wrote:
> > On Wed, Dec 4, 2024 at 4:19 AM Anton Protopopov <aspsk@xxxxxxxxxxxxx> wrote:
> > >
> > > On 24/12/03 01:25PM, Andrii Nakryiko wrote:
> > > > On Tue, Dec 3, 2024 at 5:48 AM Anton Protopopov <aspsk@xxxxxxxxxxxxx> wrote:
> > > > >
> > > > > The fd_array attribute of the BPF_PROG_LOAD syscall may contain a set
> > > > > of file descriptors: maps or btfs. This field was introduced as a
> > > > > sparse array. Introduce a new attribute, fd_array_cnt, which, if
> > > > > present, indicates that the fd_array is a continuous array of the
> > > > > corresponding length.
> > > > >
> > > > > If fd_array_cnt is non-zero, then every map in the fd_array will be
> > > > > bound to the program, as if it was used by the program. This
> > > > > functionality is similar to the BPF_PROG_BIND_MAP syscall, but such
> > > > > maps can be used by the verifier during the program load.
> > > > >
> > > > > Signed-off-by: Anton Protopopov <aspsk@xxxxxxxxxxxxx>
> > > > > ---
> > > > >  include/uapi/linux/bpf.h       | 10 ++++
> > > > >  kernel/bpf/syscall.c           |  2 +-
> > > > >  kernel/bpf/verifier.c          | 98 ++++++++++++++++++++++++++++------
> > > > >  tools/include/uapi/linux/bpf.h | 10 ++++
> > > > >  4 files changed, 104 insertions(+), 16 deletions(-)
> > > > >
> > > >
> > > > [...]
> > > >
> > > > > +/*
> > > > > + * The add_fd_from_fd_array() is executed only if fd_array_cnt is non-zero. In
> > > > > + * this case expect that every file descriptor in the array is either a map or
> > > > > + * a BTF. Everything else is considered to be trash.
> > > > > + */
> > > > > +static int add_fd_from_fd_array(struct bpf_verifier_env *env, int fd)
> > > > > +{
> > > > > +	struct bpf_map *map;
> > > > > +	CLASS(fd, f)(fd);
> > > > > +	int ret;
> > > > > +
> > > > > +	map = __bpf_map_get(f);
> > > > > +	if (!IS_ERR(map)) {
> > > > > +		ret = __add_used_map(env, map);
> > > > > +		if (ret < 0)
> > > > > +			return ret;
> > > > > +		return 0;
> > > > > +	}
> > > > > +
> > > > > +	/*
> > > > > +	 * Unlike "unused" maps which do not appear in the BPF program,
> > > > > +	 * BTFs are visible, so no reason to refcnt them now
> > > >
> > > > What does "BTFs are visible" mean? I find this behavior surprising,
> > > > tbh. Map is added to used_maps, but BTF is *not* added to used_btfs?
> > > > Why?
> > >
> > > This functionality is added to catch maps which aren't otherwise
> > > referenced by the program code, and to work with them during
> > > verification. The actual application is those "instruction set" maps
> > > for static keys. All other objects are "visible" during verification.
> >
> > That's your specific intended use case, but the API is semantically more
> > generic and shouldn't be tailored to your specific interpretation of how
> > it will/should be used. I think it is a landmine to take a reference on
> > just BPF maps and not on BTF objects: we won't be able to retrofit the
> > proper and uniform treatment later without extra flags or backwards
> > compatibility breakage.
> >
> > Even though we don't need extra "detached" BTF objects associated with
> > a BPF program right now, I can anticipate some interesting use cases
> > where we might want to attach additional BTF objects to BPF programs
> > (for whatever reasons; BTFs are a convenient bag of strings and
> > graph-based types, so they could be useful for extra
> > debugging/metadata/whatever information).
> >
> > So I can see only two ways forward. Either we disable BTFs in fd_array
> > if fd_array_cnt > 0, which will prevent its usage from light skeleton,
> > so not great.
> > Or we bump the refcount of both BPF maps and BTFs in fd_array.
> >
> > The latter seems saner, and I don't think it is a problem at all: we
> > already have used_btfs that function similarly to used_maps.
>
> This makes total sense, to treat all BPF objects in fd_array the same
> way. With BTFs the problem is that, currently, a btf fd can end up
> either in used_btfs or in kfunc_btf_tab. I will take a look at how easy
> it is to merge those two.

So, currently, during program load BTFs are parsed from file descriptors
and are stored in two places, env->used_btfs and
env->prog->aux->kfunc_btf_tab:

  1) env->used_btfs is populated only when a DW load with the
     (src_reg == BPF_PSEUDO_BTF_ID) flag set is performed

  2) kfunc_btf_tab is populated by __find_kfunc_desc_btf(), and the
     source is attr->fd_array[offset]. The kfunc_btf_tab is sorted by
     offset to allow faster search

So, to merge them, something like this might be done:

  1) If fd_array_cnt != 0, then on load create a table "used_btfs",
     sorted by offset and formatted similarly to kfunc_btf_tab in (2)
     above.

  2) On program load, change (1) to add a btf to this new sorted
     used_btfs. As there is no corresponding offset, just use offset=-1
     (not literally like this, as bsearch() wants unique keys, so under
     offset=-1 an array of btfs, aka the old used_btfs, should be
     stored)

Conceptually, this doesn't change things too much: kfunc btfs will
still be searchable in log(n) time, and the "normal" btfs will still be
searched in used_btfs in linear time.

(The other way is to just allow kfunc btfs to be loaded from fd_array
if fd_array_cnt != 0, as is done now, but as you've mentioned before,
you have other use cases in mind, so this won't work.)
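To make the merge idea above concrete, here is a minimal userspace C
sketch of a single offset-sorted BTF table: kfunc btfs are found by
bsearch() on their fd_array offset, while offset == -1 acts as the
catch-all slot for btfs loaded without an offset (the old used_btfs).
The names btf_tab_entry and btf_tab_find() are made up for illustration
and are not the kernel's kfunc_btf_tab API:

```c
#include <assert.h>
#include <stdlib.h>

struct btf_tab_entry {
	int offset;	/* index into fd_array, or -1 for "no offset" */
	int btf_id;	/* stand-in for a struct btf pointer */
};

static int cmp_offset(const void *a, const void *b)
{
	const struct btf_tab_entry *x = a, *y = b;

	return (x->offset > y->offset) - (x->offset < y->offset);
}

/* kfunc-style lookup: O(log n) by fd_array offset, NULL if absent */
static const struct btf_tab_entry *
btf_tab_find(const struct btf_tab_entry *tab, size_t cnt, int offset)
{
	struct btf_tab_entry key = { .offset = offset };

	return bsearch(&key, tab, cnt, sizeof(*tab), cmp_offset);
}
```

In the real kernel version the offset=-1 slot would hold an array of
btfs (searched linearly) rather than a single entry, since bsearch()
requires unique keys, as noted above.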
> > > > > +	 */
> > > > > +	if (!IS_ERR(__btf_get_by_fd(f)))
> > > > > +		return 0;
> > > > > +
> > > > > +	verbose(env, "fd %d is not pointing to valid bpf_map or btf\n", fd);
> > > > > +	return PTR_ERR(map);
> > > > > +}
> > > > > +
> > > > > +static int process_fd_array(struct bpf_verifier_env *env, union bpf_attr *attr, bpfptr_t uattr)
> > > > > +{
> > > > > +	size_t size = sizeof(int);
> > > > > +	int ret;
> > > > > +	int fd;
> > > > > +	u32 i;
> > > > > +
> > > > > +	env->fd_array = make_bpfptr(attr->fd_array, uattr.is_kernel);
> > > > > +
> > > > > +	/*
> > > > > +	 * The only difference between old (no fd_array_cnt is given) and new
> > > > > +	 * APIs is that in the latter case the fd_array is expected to be
> > > > > +	 * continuous and is scanned for map fds right away
> > > > > +	 */
> > > > > +	if (!attr->fd_array_cnt)
> > > > > +		return 0;
> > > > > +
> > > > > +	for (i = 0; i < attr->fd_array_cnt; i++) {
> > > > > +		if (copy_from_bpfptr_offset(&fd, env->fd_array, i * size, size))
> > > >
> > > > potential overflow in `i * size`? Do we limit fd_array_cnt anywhere to
> > > > less than INT_MAX/4?
> > >
> > > Right. So, probably cap it to (UINT_MAX/size)?
> >
> > either that or use check_mul_overflow()
>
> Ok, will fix it, thanks.

On second look, there's no overflow here on 64-bit: the (u32) operand is
converted by C's usual arithmetic conversions to (size_t) before the
multiplication, and the offset argument is also a (size_t). However, it
may still make sense to restrict the maximum size of fd_array to
something like (1 << 16). (The number of unique fds in the end will be
~(MAX_USED_MAPS + MAX_USED_BTFS + MAX_KFUNC_BTFS).)

> > > > > +			return -EFAULT;
> > > > > +
> > > > > +		ret = add_fd_from_fd_array(env, fd);
> > > > > +		if (ret)
> > > > > +			return ret;
> > > > > +	}
> > > > > +
> > > > > +	return 0;
> > > > > +}
> > > > > +
> > > >
> > > > [...]
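The promotion argument above can be sketched in userspace C. With a
64-bit size_t, `i * size` for a u32 `i` cannot overflow because `i` is
converted to size_t before the multiplication; a portable guard in the
spirit of the kernel's check_mul_overflow() can be mimicked with the
compiler builtin __builtin_mul_overflow(). The helper name
fd_array_offset() is made up for illustration:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

typedef uint32_t u32;

/*
 * Compute i * size with an explicit overflow check. Returns 0 and
 * stores the product in *off on success, -1 on overflow. On a 64-bit
 * size_t the check never fires for a u32 i and size == sizeof(int),
 * matching the reasoning in the mail; on 32-bit it would.
 */
static int fd_array_offset(u32 i, size_t size, size_t *off)
{
	if (__builtin_mul_overflow((size_t)i, size, off))
		return -1;
	return 0;
}
```

This assumes a 64-bit build; capping fd_array_cnt (e.g. to 1 << 16, as
suggested above) would make the bound explicit on all targets.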