Martin KaFai Lau <kafai@xxxxxx> writes:

> On Wed, Jun 09, 2021 at 12:33:13PM +0200, Toke Høiland-Jørgensen wrote:
> [ ... ]
>
>> @@ -551,7 +551,8 @@ static void cpu_map_free(struct bpf_map *map)
>>  	for (i = 0; i < cmap->map.max_entries; i++) {
>>  		struct bpf_cpu_map_entry *rcpu;
>>
>> -		rcpu = READ_ONCE(cmap->cpu_map[i]);
>> +		rcpu = rcu_dereference_check(cmap->cpu_map[i],
>> +					     rcu_read_lock_bh_held());
> Is rcu_read_lock_bh_held() true during map_free()?

Hmm, no, I guess not since that's called from a workqueue. Will fix!

>> @@ -149,7 +152,8 @@ static int xsk_map_update_elem(struct bpf_map *map, void *key, void *value,
>>  			       u64 map_flags)
>>  {
>>  	struct xsk_map *m = container_of(map, struct xsk_map, map);
>> -	struct xdp_sock *xs, *old_xs, **map_entry;
>> +	struct xdp_sock __rcu **map_entry;
>> +	struct xdp_sock *xs, *old_xs;
>>  	u32 i = *(u32 *)key, fd = *(u32 *)value;
>>  	struct xsk_map_node *node;
>>  	struct socket *sock;
>> @@ -179,7 +183,7 @@ static int xsk_map_update_elem(struct bpf_map *map, void *key, void *value,
>>  	}
>>
>>  	spin_lock_bh(&m->lock);
>> -	old_xs = READ_ONCE(*map_entry);
>> +	old_xs = rcu_dereference_check(*map_entry, rcu_read_lock_bh_held());
> Is it actually protected by the m->lock at this point?

True, can just add that to the check.

>>  void xsk_map_try_sock_delete(struct xsk_map *map, struct xdp_sock *xs,
>> -			     struct xdp_sock **map_entry)
>> +			     struct xdp_sock __rcu **map_entry)
>>  {
>>  	spin_lock_bh(&map->lock);
>> -	if (READ_ONCE(*map_entry) == xs) {
>> -		WRITE_ONCE(*map_entry, NULL);
>> +	if (rcu_dereference(*map_entry) == xs) {
> nit. rcu_access_pointer()?

Yup.
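
To make sure we're on the same page, here is roughly what I have in
mind for v2 - all untested sketches, so the details may still change.

For the map_free() path, since that runs from a workqueue after the
last user is gone, one option would be rcu_dereference_raw() plus a
comment documenting why it's safe to skip the check:

	for (i = 0; i < cmap->map.max_entries; i++) {
		struct bpf_cpu_map_entry *rcpu;

		/* No concurrent access here: map_free() runs from a
		 * workqueue after the last map reference is gone, so
		 * deliberately bypass the lockdep check.
		 */
		rcpu = rcu_dereference_raw(cmap->cpu_map[i]);
		if (!rcpu)
			continue;
		...
	}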
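
For xsk_map_update_elem(), adding the lock to the check would look
something like this:

	spin_lock_bh(&m->lock);
	/* Safe either under the RCU read lock or with m->lock held;
	 * the latter is what actually protects us at this point.
	 */
	old_xs = rcu_dereference_check(*map_entry,
				       rcu_read_lock_bh_held() ||
				       lockdep_is_held(&m->lock));

Or possibly rcu_dereference_protected(*map_entry,
lockdep_is_held(&m->lock)), since we always hold the lock here anyway.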
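
And for xsk_map_try_sock_delete() with your nit folded in:

	spin_lock_bh(&map->lock);
	/* We only compare the pointer value, never dereference it, so
	 * rcu_access_pointer() is sufficient and needs no lockdep
	 * condition at all.
	 */
	if (rcu_access_pointer(*map_entry) == xs) {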