On 2020/01/09 6:35, John Fastabend wrote:
Now that we depend on call_rcu() and synchronize_rcu() to also wait
for preempt-disabled regions to complete, the RCU read critical section
in __dev_map_flush() is no longer relevant.
These originally ensured the map reference was safe while a map was
also being freed. But, under the new rules, flush can only be called
from preempt-disabled NAPI context. The synchronize_rcu() from the map
free path and the call_rcu() from the delete path will ensure the
reference here is safe. So let's remove the rcu_read_lock() and
rcu_read_unlock() pair to avoid any confusion around how this is being
protected. If the rcu_read_lock() were still required, it would mean
the above logic is in error and the original patch would also be wrong.
Fixes: 0536b85239b84 ("xdp: Simplify devmap cleanup")
Signed-off-by: John Fastabend <john.fastabend@xxxxxxxxx>
---
kernel/bpf/devmap.c | 2 --
1 file changed, 2 deletions(-)
diff --git a/kernel/bpf/devmap.c b/kernel/bpf/devmap.c
index f0bf525..0129d4a 100644
--- a/kernel/bpf/devmap.c
+++ b/kernel/bpf/devmap.c
@@ -378,10 +378,8 @@ void __dev_map_flush(void)
 	struct list_head *flush_list = this_cpu_ptr(&dev_map_flush_list);
 	struct xdp_bulk_queue *bq, *tmp;
 
-	rcu_read_lock();
 	list_for_each_entry_safe(bq, tmp, flush_list, flush_node)
 		bq_xmit_all(bq, XDP_XMIT_FLUSH);
-	rcu_read_unlock();
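
For context, the flush path after this change would read roughly as
follows. This is reconstructed from the hunk above; the closing brace
and the comment are mine, not part of the patch:

void __dev_map_flush(void)
{
	struct list_head *flush_list = this_cpu_ptr(&dev_map_flush_list);
	struct xdp_bulk_queue *bq, *tmp;

	/* Runs in NAPI context with preemption disabled. With consolidated
	 * RCU, synchronize_rcu() on the map free path and call_rcu() on the
	 * delete path already wait for this region, so no explicit
	 * rcu_read_lock()/rcu_read_unlock() pair is needed here.
	 */
	list_for_each_entry_safe(bq, tmp, flush_list, flush_node)
		bq_xmit_all(bq, XDP_XMIT_FLUSH);
}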
I introduced this lock because some drivers assume that
.ndo_xdp_xmit() is called under RCU (commit 86723c864063).
Maybe the devmap deletion logic does not need this anymore, but is it
OK for drivers?
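
The kind of driver-side pattern I have in mind looks roughly like the
sketch below. It is a minimal hypothetical example (foo_priv, foo_ring
and foo_ring_enqueue are placeholders, not taken from any real driver):
an .ndo_xdp_xmit implementation that dereferences an RCU-protected
pointer and has so far relied on the caller holding rcu_read_lock():

#include <linux/netdevice.h>
#include <linux/rcupdate.h>
#include <net/xdp.h>

/* Placeholder types/helpers for illustration only. */
struct foo_ring;
struct foo_priv {
	struct foo_ring __rcu *tx_ring;
};
static bool foo_ring_enqueue(struct foo_ring *ring, struct xdp_frame *frame);

static int foo_xdp_xmit(struct net_device *dev, int n,
			struct xdp_frame **frames, u32 flags)
{
	struct foo_priv *priv = netdev_priv(dev);
	struct foo_ring *ring;
	int i, sent = 0;

	/* The driver assumes the caller is in an RCU read-side critical
	 * section (or, after this patch, at least in a preempt-disabled
	 * NAPI region that consolidated RCU treats equivalently).
	 */
	ring = rcu_dereference(priv->tx_ring);
	if (!ring)
		return -ENXIO;

	for (i = 0; i < n; i++) {
		if (!foo_ring_enqueue(ring, frames[i]))
			break;
		sent++;
	}

	return sent;
}

So the question is whether drivers like this are still fine with the
flush path no longer holding rcu_read_lock() explicitly.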
Toshiaki Makita