On 12/13/23 3:05 AM, Hou Tao wrote:
On 12/13/2023 6:30 AM, Yonghong Song wrote:
Typically, for a percpu map element or data structure, once allocated,
most operations are lookups or in-place updates. Deletions are really
rare. Currently, for percpu data structures, 4 elements will be
refilled if the unit size is <= 256. Let us refill just one element
for percpu data. For example, for a unit size of 256 bytes and 128 cpus,
the potential saving is 3 * 256 * 128 * 128 = 12MB (3 fewer prefilled
elements, each holding 256 bytes per cpu, in the per-cpu cache of every cpu).
Signed-off-by: Yonghong Song <yonghong.song@xxxxxxxxx>
---
kernel/bpf/memalloc.c | 6 +++++-
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/kernel/bpf/memalloc.c b/kernel/bpf/memalloc.c
index 84987e97fd0a..a1d718ee264d 100644
--- a/kernel/bpf/memalloc.c
+++ b/kernel/bpf/memalloc.c
@@ -483,11 +483,15 @@ static void init_refill_work(struct bpf_mem_cache *c)
static void prefill_mem_cache(struct bpf_mem_cache *c, int cpu)
{
+ int cnt = 1;
+
/* To avoid consuming memory assume that 1st run of bpf
* prog won't be doing more than 4 map_update_elem from
* irq disabled region
*/
Please update the comments accordingly.
Ack.
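If it helps, one possible wording for the updated comment in the next
revision could be something like below (just a sketch, the exact text is
of course up to you):

	/* To avoid consuming memory, for non-percpu allocation, assume
	 * that the 1st run of a bpf prog won't be doing more than 4
	 * map_update_elem from an irq-disabled region. For percpu
	 * allocation, prefill only one element: percpu elements are
	 * mostly looked up or updated in place, so a larger prefill
	 * mainly wastes memory.
	 */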
- alloc_bulk(c, c->unit_size <= 256 ? 4 : 1, cpu_to_node(cpu), false);
+ if (!c->percpu_size && c->unit_size <= 256)
+ cnt = 4;
+ alloc_bulk(c, cnt, cpu_to_node(cpu), false);
}
static int check_obj_size(struct bpf_mem_cache *c, unsigned int idx)
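For readers following the thread, the prefill path with this patch applied
would look roughly like the following (reconstructed from the diff above;
the comment still has the old wording, pending the update acked above):

static void prefill_mem_cache(struct bpf_mem_cache *c, int cpu)
{
	int cnt = 1;

	/* To avoid consuming memory assume that 1st run of bpf
	 * prog won't be doing more than 4 map_update_elem from
	 * irq disabled region
	 */
	if (!c->percpu_size && c->unit_size <= 256)
		cnt = 4;
	alloc_bulk(c, cnt, cpu_to_node(cpu), false);
}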