Previous discussion at:
  https://lore.kernel.org/bpf/7ba9b492-8a08-a1d0-9c6e-03be4b8e5e07@xxxxxx/T/#t

The previous approach tried to use the existing per-map operations
bpf_map_{get_next_key, lookup_elem, update_elem, delete_elem} to
implement a batching process. It has a serious drawback when the
prev_key used by bpf_map_get_next_key() is no longer in the hash
table. In that case, since the hash table has no idea where `prev_key`
was placed in its bucket before deletion, it currently returns the
first key. As a result, batch processing may see duplicated elements,
or, in the worst case when the hash table sees heavy update/delete
traffic, may never finish.

This RFC patch set implements bucket-based batching for hashtab. That
is, for lookup/delete, either the whole bucket is processed or none of
the elements in the bucket is processed. Forward progress is also
guaranteed as long as the user provides a large enough buffer.

This RFC also serves as a base for discussion at the upcoming LPC 2019
BPF Microconference.

Changelogs:
  v1 -> RFC v2:
    . To address the bpf_map_get_next_key() issue where the first key
      is returned if the prev_key is not found, implement per-map batch
      operations for hashtab/lru_hashtab using the bucket lock, as
      suggested by Alexei.

Cc: Jakub Kicinski <jakub.kicinski@xxxxxxxxxxxxx>
Cc: Brian Vazquez <brianvv@xxxxxxxxxx>
Cc: Stanislav Fomichev <sdf@xxxxxxxxxx>

Yonghong Song (2):
  bpf: adding map batch processing support
  tools/bpf: test bpf_map_lookup_and_delete_batch()

 include/linux/bpf.h                                 |   9 +
 include/uapi/linux/bpf.h                            |  22 ++
 kernel/bpf/hashtab.c                                | 324 ++++++++++++++++++
 kernel/bpf/syscall.c                                |  68 ++++
 tools/include/uapi/linux/bpf.h                      |  22 ++
 tools/lib/bpf/bpf.c                                 |  59 ++++
 tools/lib/bpf/bpf.h                                 |  13 +
 tools/lib/bpf/libbpf.map                            |   4 +
 .../map_tests/map_lookup_and_delete_batch.c         | 155 +++++++++
 9 files changed, 676 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/map_tests/map_lookup_and_delete_batch.c

-- 
2.17.1