Re: [PATCH] bpf: use count for prealloc hashtab too

On Fri, Oct 15, 2021 at 11:04 AM Chengming Zhou
<zhouchengming@xxxxxxxxxxxxx> wrote:
>
> We only use count for the kmalloc hashtab, not for the prealloc hashtab,
> because __pcpu_freelist_pop() returns NULL when there are no more elements
> in the pcpu freelist.
>
> The problem is that __pcpu_freelist_pop() will traverse all CPUs, taking
> the spin_lock of every CPU, only to find in the end that no element is left.
>
> We encountered a bad case on a big system with 96 CPUs where
> alloc_htab_elem() could take up to 1ms. This patch uses count for the
> prealloc hashtab too, avoiding the traversal and spin_lock across all
> CPUs in this case.
>
> Signed-off-by: Chengming Zhou <zhouchengming@xxxxxxxxxxxxx>

It's not clear from the commit log what you're solving.
The atomic inc/dec in the critical path of prealloc maps hurts
performance. That's why it's not used there.
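
[To make the objection concrete, here is a hypothetical sketch of what
gating prealloc allocation on a shared counter would look like; try_charge()
and uncharge() are illustrative names, not kernel code. Every allocation and
free does an atomic read-modify-write on the same counter, so on a 96-CPU
system all CPUs bounce the cache line holding it. That is the cost the
prealloc path currently avoids.]

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Shared across all CPUs: each atomic RMW below contends on the
 * single cache line holding 'count'. */
static atomic_long count;
static long max_entries = 1000000;

/* Hypothetical fast-path check an element-count limit would add
 * to every allocation. */
static bool try_charge(void)
{
	if (atomic_fetch_add(&count, 1) >= max_entries) {
		/* Over the limit: undo the increment and fail. */
		atomic_fetch_sub(&count, 1);
		return false;
	}
	return true;
}

/* Matching decrement on every free. */
static void uncharge(void)
{
	atomic_fetch_sub(&count, 1);
}
```

[The freelist-based scheme pays nothing per operation in the common case;
the counter-based scheme pays one contended atomic on every alloc and free.]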


