> https://github.com/intel/host-int
>
> In particular, we should use spin locks at least for this map in the
> program intmd_xdp_ksink.c:
> https://github.com/intel/host-int/blob/main/src/xdp/intmd_xdp_ksink.c#L27-L31
>
> If I may, I would suggest using percpu hash maps and aggregating the
> stats from the same flow_key in your userspace daemon. That way you
> can avoid spin locks completely, as it models one key to n values, where
> n is the number of CPUs. You can even leverage batching if your map
> has a considerable number of keys [1], which in my experience can
> handle large maps without noticeable overhead.
>
> Pedro
>
> [1] https://elixir.bootlin.com/linux/latest/source/tools/testing/selftests/bpf/map_tests/htab_map_batch_ops.c

Thanks for the suggestion, Pedro.

The eBPF programs I have in mind do things like inserting a
per-application-flow sequence number into a new header in each packet
at the source host, and then maintaining per-flow state at the
receiving hosts for packets carrying those sequence numbers, in order
to detect packet drops in the network, i.e. sequence numbers that are
never received.

I know that per-CPU maps exist in eBPF, and they are perfect when all
you want to maintain is something like packet or byte counters, or
counters for other events, because the per-CPU entries can be combined
by simply adding their counts together. For our use case, however, I
do not see an effective way to use per-CPU maps and still perform the
desired packet processing, because the per-flow state is not additive
across CPUs. Hence the desire to use spin locks.

Thanks,
Andy Fingerhut