On 6/21/22 3:17 PM, John Fastabend wrote:
> Dave Marchevsky wrote:
>> Add benchmarks to demonstrate the performance cliff for local_storage
>> get as the number of local_storage maps increases beyond the current
>> local_storage implementation's cache size.
>>
>> "sequential get" and "interleaved get" benchmarks are added, both of
>> which do many bpf_task_storage_get calls on sets of task local_storage
>> maps of various counts, while considering a single specific map to be
>> 'important' and counting task_storage_gets to the important map
>> separately in addition to the normal 'hits' count of all gets. The
>> goal here is to mimic a scenario where a particular program using one
>> map - the important one - is running on a system where many other
>> local_storage maps exist and are accessed often.
>>
>> While the "sequential get" benchmark does bpf_task_storage_get for
>> maps 0, 1, ..., {9, 99, 999} in order, the "interleaved" benchmark
>> interleaves 4 bpf_task_storage_gets for the important map for every 10
>> map gets. This is meant to highlight performance differences when the
>> important map is accessed far more frequently than non-important maps.
>>
>> A "hashmap control" benchmark is also included for easy comparison of
>> standard bpf hashmap lookup vs local_storage get. The benchmark is
>> similar to "sequential get", but creates and uses BPF_MAP_TYPE_HASH
>> instead of local storage. Only one inner map is created - a hashmap
>> meant to hold a tid -> data mapping for all tasks. The size of the
>> hashmap is hardcoded to my system's PID_MAX_LIMIT (4,194,304). The
>> number of these keys which are actually fetched as part of the
>> benchmark is configurable.
>>
>> The addition of this benchmark is inspired by conversation with Alexei
>> in a previous patchset's thread [0], which highlighted the need for
>> such a benchmark to motivate and validate improvements to the
>> local_storage implementation.
>> My approach in that series focused
>> on improving performance for explicitly-marked 'important' maps and
>> was rejected with feedback to make more generally-applicable
>> improvements while avoiding explicitly marking maps as important. Thus
>> the benchmark reports both general and important-map-focused metrics,
>> so the effect of future work on both is clear.
>>
>> Regarding the benchmark results, on a powerful system (Skylake, 20
>> cores, 256gb ram):
>>
>> Hashmap Control
>> ===============
>> num keys: 10
>> hashmap (control) sequential get: hits throughput: 20.900 ± 0.334 M ops/s, hits latency: 47.847 ns/op, important_hits throughput: 20.900 ± 0.334 M ops/s
>>
>> num keys: 1000
>> hashmap (control) sequential get: hits throughput: 13.758 ± 0.219 M ops/s, hits latency: 72.683 ns/op, important_hits throughput: 13.758 ± 0.219 M ops/s
>>
>> num keys: 10000
>> hashmap (control) sequential get: hits throughput: 6.995 ± 0.034 M ops/s, hits latency: 142.959 ns/op, important_hits throughput: 6.995 ± 0.034 M ops/s
>>
>> num keys: 100000
>> hashmap (control) sequential get: hits throughput: 4.452 ± 0.371 M ops/s, hits latency: 224.635 ns/op, important_hits throughput: 4.452 ± 0.371 M ops/s
>>
>> num keys: 4194304
>> hashmap (control) sequential get: hits throughput: 3.043 ± 0.033 M ops/s, hits latency: 328.587 ns/op, important_hits throughput: 3.043 ± 0.033 M ops/s
>>
>
> Why is the hashmap lookup not constant with the number of keys? It
> looks like it's prepopulated without collisions, so I wouldn't expect
> any extra ops on the lookup side after looking at the code quickly.
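For clarity, the two access patterns being measured can be sketched in miniature as follows. This is an illustrative Python stand-in, not the BPF benchmark code itself: map_get() stands in for a bpf_task_storage_get on map i, and the exact positions of the 4-per-10 interleaved gets are an assumption.

```python
# Illustrative sketch of the benchmark access patterns (not BPF code).
# IMPORTANT_MAP and the interleave positions are assumptions for
# illustration; the cover letter specifies only "4 important-map gets
# per 10 map gets".

IMPORTANT_MAP = 0

def sequential_get(nr_maps):
    """One get per map, in creation order 0..nr_maps-1."""
    hits = important_hits = 0
    for i in range(nr_maps):
        hits += 1                        # every get counts as a hit
        if i == IMPORTANT_MAP:
            important_hits += 1          # gets to the important map
    return hits, important_hits

def interleaved_get(nr_maps):
    """Same sweep, plus 4 extra important-map gets per 10 regular gets."""
    hits = important_hits = 0
    for i in range(nr_maps):
        hits += 1
        if i == IMPORTANT_MAP:
            important_hits += 1
        if i % 10 in (0, 3, 6, 9):       # 4 extra gets per 10 maps
            hits += 1
            important_hits += 1
    return hits, important_hits
```

With nr_maps well above the cache size, the interleaved pattern keeps the important map "hot" while the sequential pattern touches it only once per sweep, which is what drives the important_hits gap in the numbers below.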
>
>> Local Storage
>> =============
>> num_maps: 1
>> local_storage cache sequential get: hits throughput: 47.298 ± 0.180 M ops/s, hits latency: 21.142 ns/op, important_hits throughput: 47.298 ± 0.180 M ops/s
>> local_storage cache interleaved get: hits throughput: 55.277 ± 0.888 M ops/s, hits latency: 18.091 ns/op, important_hits throughput: 55.277 ± 0.888 M ops/s
>>
>> num_maps: 10
>> local_storage cache sequential get: hits throughput: 40.240 ± 0.802 M ops/s, hits latency: 24.851 ns/op, important_hits throughput: 4.024 ± 0.080 M ops/s
>> local_storage cache interleaved get: hits throughput: 48.701 ± 0.722 M ops/s, hits latency: 20.533 ns/op, important_hits throughput: 17.393 ± 0.258 M ops/s
>>
>> num_maps: 16
>> local_storage cache sequential get: hits throughput: 44.515 ± 0.708 M ops/s, hits latency: 22.464 ns/op, important_hits throughput: 2.782 ± 0.044 M ops/s
>> local_storage cache interleaved get: hits throughput: 49.553 ± 2.260 M ops/s, hits latency: 20.181 ns/op, important_hits throughput: 15.767 ± 0.719 M ops/s
>>
>> num_maps: 17
>> local_storage cache sequential get: hits throughput: 38.778 ± 0.302 M ops/s, hits latency: 25.788 ns/op, important_hits throughput: 2.284 ± 0.018 M ops/s
>> local_storage cache interleaved get: hits throughput: 43.848 ± 1.023 M ops/s, hits latency: 22.806 ns/op, important_hits throughput: 13.349 ± 0.311 M ops/s
>>
>> num_maps: 24
>> local_storage cache sequential get: hits throughput: 19.317 ± 0.568 M ops/s, hits latency: 51.769 ns/op, important_hits throughput: 0.806 ± 0.024 M ops/s
>> local_storage cache interleaved get: hits throughput: 24.397 ± 0.272 M ops/s, hits latency: 40.989 ns/op, important_hits throughput: 6.863 ± 0.077 M ops/s
>>
>> num_maps: 32
>> local_storage cache sequential get: hits throughput: 13.333 ± 0.135 M ops/s, hits latency: 75.000 ns/op, important_hits throughput: 0.417 ± 0.004 M ops/s
>> local_storage cache interleaved get: hits throughput: 16.898 ± 0.383 M ops/s, hits latency: 59.178 ns/op, important_hits throughput: 4.717 ± 0.107 M ops/s
>>
>> num_maps: 100
>> local_storage cache sequential get: hits throughput: 6.360 ± 0.107 M ops/s, hits latency: 157.233 ns/op, important_hits throughput: 0.064 ± 0.001 M ops/s
>> local_storage cache interleaved get: hits throughput: 7.303 ± 0.362 M ops/s, hits latency: 136.930 ns/op, important_hits throughput: 1.907 ± 0.094 M ops/s
>>
>> num_maps: 1000
>> local_storage cache sequential get: hits throughput: 0.452 ± 0.010 M ops/s, hits latency: 2214.022 ns/op, important_hits throughput: 0.000 ± 0.000 M ops/s
>> local_storage cache interleaved get: hits throughput: 0.542 ± 0.007 M ops/s, hits latency: 1843.341 ns/op, important_hits throughput: 0.136 ± 0.002 M ops/s
>>
>> Looking at the "sequential get" results, it's clear that as the
>> number of task local_storage maps grows beyond the current cache size
>> (16), there's a significant reduction in hits throughput. Note that
>> the current local_storage implementation assigns a cache_idx to maps
>> as they are created. Since "sequential get" is creating maps 0..n in
>> order and then doing bpf_task_storage_get calls in the same order, the
>> benchmark is effectively ensuring that a map will not be in cache when
>> the program tries to access it.
>>
>> For the "interleaved get" results, important-map hits throughput is
>> greatly increased as the important map is more likely to be in cache
>> by virtue of being accessed far more frequently. Throughput still
>> degrades as the number of maps increases, though.
>>
>> To get a sense of the overhead of the benchmark program, I
>> commented out bpf_task_storage_get/bpf_map_lookup_elem in
>> local_storage_bench.c and ran the benchmark on the same host as the
>> 'real' run. Results:
>
> Also, just checking that the hash overhead was measured including the
> urandom, so we can pull that out of the cost.
>
> [...]
>

Yep, confirmed.
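As an aside, the cache cliff described above can be reproduced with a toy model. This is an editor's illustrative sketch, not kernel code: it assumes for simplicity that cache_idx is assigned round-robin (map_id % 16), so once more maps exist than cache slots, a sequential sweep evicts each cached entry before it is reused and the hit rate collapses.

```python
# Toy model of a 16-slot local_storage cache (illustration only, not
# the kernel implementation). Assumption: cache_idx = map_id % 16,
# which approximates the real assignment when maps are created back
# to back in order.

CACHE_SIZE = 16

def sequential_hit_rate(nr_maps, rounds=100):
    """Fraction of gets served from cache during repeated 0..n-1 sweeps."""
    cache = [None] * CACHE_SIZE          # slot -> map_id currently cached
    hits = total = 0
    for _ in range(rounds):
        for map_id in range(nr_maps):
            slot = map_id % CACHE_SIZE
            if cache[slot] == map_id:
                hits += 1                # cached entry still valid
            else:
                cache[slot] = map_id     # miss: evict the slot's occupant
            total += 1
    return hits / total
```

In this model the hit rate is near 1.0 for nr_maps <= 16, drops as soon as any two maps share a slot (nr_maps = 17), and reaches 0 at nr_maps = 32 where every slot is contended, which tracks the shape of the sequential-get numbers above.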