On 11/7/22 12:50 AM, Xu Kuohai wrote:
From: Xu Kuohai <xukuohai@xxxxxxxxxx>
pcpu_freelist_populate() initializes nr_elems / num_possible_cpus() + 1
free nodes for some cpus, then possibly one cpu with fewer nodes, and
the remaining cpus with 0 free nodes. For example, when nr_elems == 256
and num_possible_cpus() == 32, if CPU 0 is the current cpu, CPU 0~27
each gets 9 free nodes, CPU 28 gets 4 free nodes, and CPU 29~31 get 0
free nodes, while in fact each CPU should get 8 nodes equally.
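To make the skew easy to reproduce outside the kernel, here is a
minimal userspace model of the current loop (counts[] is a hypothetical
stand-in for the per-CPU freelists, and pcpu_freelist_push_node() is
reduced to a counter increment); for 256 elems and 32 CPUs it prints 9
for cpu 0~27, 4 for cpu 28, and 0 for cpu 29~31:

#include <stdio.h>

int main(void)
{
        int nr_elems = 256, nr_cpus = 32;
        int counts[32] = {0};
        int i, cpu, pcpu_entries;

        pcpu_entries = nr_elems / nr_cpus + 1;  /* 256 / 32 + 1 == 9 */
        i = 0;
        for (cpu = 0; cpu < nr_cpus; cpu++) {
again:
                counts[cpu]++;  /* models pcpu_freelist_push_node() */
                i++;
                if (i == nr_elems)
                        break;
                if (i % pcpu_entries)
                        goto again;
        }
        for (cpu = 0; cpu < nr_cpus; cpu++)
                printf("cpu %d: %d\n", cpu, counts[cpu]);
        return 0;
}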
This patch first initializes nr_elems / num_possible_cpus() free nodes
for each CPU, and then distributes the remaining free nodes, one per
CPU, until none are left.
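The intended split is just base-plus-remainder arithmetic; written as a
hypothetical helper (nodes_for() is illustrative, not part of the
patch), the per-CPU count would be:

static int nodes_for(int cpu_idx, int nr_elems, int nr_cpus)
{
        /* every CPU gets nr_elems / nr_cpus nodes; the first
         * nr_elems % nr_cpus CPUs each get one extra
         */
        return nr_elems / nr_cpus + (cpu_idx < nr_elems % nr_cpus ? 1 : 0);
}

For 256 elems on 32 CPUs this gives 8 everywhere; for 100 elems it
gives 4 to cpu 0~3 and 3 to the rest.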
Signed-off-by: Xu Kuohai <xukuohai@xxxxxxxxxx>
LGTM. Did you observe any performance issues?
Acked-by: Yonghong Song <yhs@xxxxxx>
---
kernel/bpf/percpu_freelist.c | 9 ++++++---
1 file changed, 6 insertions(+), 3 deletions(-)
diff --git a/kernel/bpf/percpu_freelist.c b/kernel/bpf/percpu_freelist.c
index b6e7f5c5b9ab..89e84f7381cc 100644
--- a/kernel/bpf/percpu_freelist.c
+++ b/kernel/bpf/percpu_freelist.c
@@ -100,12 +100,15 @@ void pcpu_freelist_populate(struct pcpu_freelist *s, void *buf, u32 elem_size,
                             u32 nr_elems)
 {
         struct pcpu_freelist_head *head;
-        int i, cpu, pcpu_entries;
+        int i, cpu, pcpu_entries, remain_entries;
+
+        pcpu_entries = nr_elems / num_possible_cpus();
+        remain_entries = nr_elems % num_possible_cpus();
 
-        pcpu_entries = nr_elems / num_possible_cpus() + 1;
         i = 0;
 
         for_each_possible_cpu(cpu) {
+                int j = i + pcpu_entries + (remain_entries-- > 0 ? 1 : 0);
 again:
                 head = per_cpu_ptr(s->freelist, cpu);
                 /* No locking required as this is not visible yet. */
@@ -114,7 +117,7 @@ void pcpu_freelist_populate(struct pcpu_freelist *s, void *buf, u32 elem_size,
                 buf += elem_size;
                 if (i == nr_elems)
                         break;
-                if (i % pcpu_entries)
+                if (i < j)
                         goto again;
         }
 }
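For what it's worth, re-running the same userspace model as above, but
with the patched loop body and a count that does not divide evenly
(100 elems, 32 CPUs), shows the remainder being spread one node at a
time: cpu 0~3 get 4 nodes and cpu 4~31 get 3, 100 in total:

#include <stdio.h>

int main(void)
{
        int nr_elems = 100, nr_cpus = 32;
        int counts[32] = {0};
        int i, cpu, pcpu_entries, remain_entries;

        pcpu_entries = nr_elems / nr_cpus;      /* 100 / 32 == 3 */
        remain_entries = nr_elems % nr_cpus;    /* 100 % 32 == 4 */

        i = 0;
        for (cpu = 0; cpu < nr_cpus; cpu++) {
                int j = i + pcpu_entries + (remain_entries-- > 0 ? 1 : 0);
again:
                counts[cpu]++;  /* models pcpu_freelist_push_node() */
                i++;
                if (i == nr_elems)
                        break;
                if (i < j)
                        goto again;
        }
        for (cpu = 0; cpu < nr_cpus; cpu++)
                printf("cpu %d: %d\n", cpu, counts[cpu]);
        return 0;
}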