On Mon, Nov 18, 2019 at 5:38 PM Daniel Borkmann <daniel@xxxxxxxxxxxxx> wrote:
>
> This work adds program tracking to prog array maps. This is needed such
> that upon prog array updates/deletions we can fix up all programs which
> make use of this tail call map. We add ops->map_poke_{un,}track() helpers
> to maps to maintain the list of programs and ops->map_poke_run() for
> triggering the actual update. bpf_array_aux is extended to contain the
> list head and poke_mutex in order to serialize program patching during
> updates/deletions. bpf_free_used_maps() will untrack the program shortly
> before dropping the reference to the map.
>
> The prog_array_map_poke_run() is triggered during updates/deletions and
> walks the maintained prog list. It checks in their poke_tabs whether the
> map and key is matching and runs the actual bpf_arch_text_poke() for
> patching in the nop or new jmp location. Depending on the type of update,
> we use one of BPF_MOD_{NOP_TO_JUMP,JUMP_TO_NOP,JUMP_TO_JUMP}.
>
> Signed-off-by: Daniel Borkmann <daniel@xxxxxxxxxxxxx>
> ---
>  include/linux/bpf.h   |  36 ++++++++++
>  kernel/bpf/arraymap.c | 151 +++++++++++++++++++++++++++++++++++++++++-
>  kernel/bpf/core.c     |   9 ++-
>  3 files changed, 193 insertions(+), 3 deletions(-)
>

[...]

>  #endif /* _LINUX_BPF_H */
> diff --git a/kernel/bpf/arraymap.c b/kernel/bpf/arraymap.c
> index 5be12db129cc..d2b559c6659e 100644
> --- a/kernel/bpf/arraymap.c
> +++ b/kernel/bpf/arraymap.c
> @@ -586,10 +586,14 @@ int bpf_fd_array_map_update_elem(struct bpf_map *map, struct file *map_file,
>         if (IS_ERR(new_ptr))
>                 return PTR_ERR(new_ptr);
>
> +       bpf_map_poke_lock(map);
>         old_ptr = xchg(array->ptrs + index, new_ptr);
> +       if (map->ops->map_poke_run)
> +               map->ops->map_poke_run(map, index, old_ptr, new_ptr);
> +       bpf_map_poke_unlock(map);

So this is a bit subtle, if I understand correctly. I was originally going
to suggest that if no map->ops->map_poke_run is set, then
bpf_map_poke_{lock,unlock} shouldn't be called at all. But then I realized
that this creates a race, where the xchg operations could happen in a
different order than the map_poke_run calls. Am I right?

If yes, I wonder if it would be better to express this logic more
explicitly as below, to avoid someone else "optimizing" the code later:

        if (map->ops->map_poke_run) {
                bpf_map_poke_lock(map);
                old_ptr = xchg(array->ptrs + index, new_ptr);
                map->ops->map_poke_run(map, index, old_ptr, new_ptr);
                bpf_map_poke_unlock(map);
        } else {
                old_ptr = xchg(array->ptrs + index, new_ptr);
        }

This would make it more apparent that something different is happening when
poke tracking is supported by a map. Am I overthinking this?

> +
>         if (old_ptr)
>                 map->ops->map_fd_put_ptr(old_ptr);
> -
>         return 0;
>  }
>

[...]

> +static void prog_array_map_poke_untrack(struct bpf_map *map,
> +                                        struct bpf_prog_aux *prog_aux)
> +{
> +       struct prog_poke_elem *elem, *tmp;
> +       struct bpf_array_aux *aux;
> +
> +       aux = container_of(map, struct bpf_array, map)->aux;
> +       mutex_lock(&aux->poke_mutex);
> +       list_for_each_entry_safe(elem, tmp, &aux->poke_progs, list) {
> +               if (elem->aux == prog_aux) {
> +                       list_del_init(&elem->list);
> +                       kfree(elem);

break; ?

> +               }
> +       }
> +       mutex_unlock(&aux->poke_mutex);
> +}
> +

[...]

> +
> +                       ret = bpf_arch_text_poke(poke->ip, type,
> +                                                old ? (u8 *)old->bpf_func +
> +                                                poke->adj_off : NULL,
> +                                                new ? (u8 *)new->bpf_func +
> +                                                poke->adj_off : NULL);

nit: extract the old/new address calculation so it's not wrapped across
multiple lines? It's a bit hard to follow as is.

> +                       BUG_ON(ret < 0 && ret != -EINVAL);
> +               }
> +       }
> +}
> +

[...]
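
To spell out the untrack comment above: the break I have in mind is just an
early exit once the matching element is found. Rough, untested sketch,
assuming a prog can only ever be tracked once per map:

        list_for_each_entry_safe(elem, tmp, &aux->poke_progs, list) {
                if (elem->aux == prog_aux) {
                        list_del_init(&elem->list);
                        kfree(elem);
                        /* assumes each prog appears at most once on
                         * poke_progs, so we can stop at the first match
                         */
                        break;
                }
        }

If duplicates can end up on poke_progs, the break would of course be wrong,
so this only works if the map_poke_track() side guarantees uniqueness.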
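
And to make the bpf_arch_text_poke() nit concrete, roughly this shape is
what I had in mind (untested; old_addr/new_addr are just illustrative
names):

        u8 *old_addr, *new_addr;

        /* compute the patch target addresses once up front ... */
        old_addr = old ? (u8 *)old->bpf_func + poke->adj_off : NULL;
        new_addr = new ? (u8 *)new->bpf_func + poke->adj_off : NULL;

        /* ... so the call itself fits on a single line */
        ret = bpf_arch_text_poke(poke->ip, type, old_addr, new_addr);
        BUG_ON(ret < 0 && ret != -EINVAL);

Each conditional then shows up only once and the call site is much easier
to read, imo.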