Re: [PATCH net-next 1/2] net: Keep sk->sk_forward_alloc as a proper size

On Thu, May 11, 2023 at 9:35 AM Eric Dumazet <edumazet@xxxxxxxxxx> wrote:
>
[...]
>
> The suspect part is really:
>
> >      8.98%  mc-worker        [kernel.vmlinux]          [k] page_counter_cancel
> >             |
> >              --8.97%--page_counter_cancel
> >                        |
> >                         --8.97%--page_counter_uncharge
> >                                   drain_stock
> >                                   __refill_stock
> >                                   refill_stock
> >                                   |
> >                                    --8.91%--try_charge_memcg
> >                                              mem_cgroup_charge_skmem
> >                                              |
> >                                               --8.91%--__sk_mem_raise_allocated
> >                                                         __sk_mem_schedule
>
> Shakeel, networking has a per-cpu cache, of +/- 1MB.
>
> Even with asymmetric alloc/free, this would mean that a 100Gbit NIC
> would require something like 25,000
> operations on the shared cache line per second.
>
> Hardly an issue I think.
>
> memcg does not seem to have an equivalent strategy ?

memcg has a +256KiB per-cpu cache (note the absence of a '-' side,
unlike the networking cache's +/- window). However, it seems Cathy has
already tested with a 4MiB (1024-page) batch, which is comparable to
the networking per-cpu cache (i.e. a 2MiB window), and she still sees
the issue. Additionally, this is a single-machine test (no NIC), so I
am kind of contemplating between (1) deciding this is not a real-world
workload and ignoring it, or (2) implementing an asymmetric
charge/uncharge strategy for memcg.
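
For the sake of discussion, here is a rough userspace sketch of what
(2) might look like: charges pull from a per-CPU stock in large
batches, while uncharges only drain back to the shared page counter
once the local stock exceeds a larger threshold, so a tight
charge/uncharge cycle never touches the shared cache line. The names
(CHARGE_BATCH, UNCHARGE_BATCH, pcpu_stock) are made up for
illustration; an actual change would live around
consume_stock()/refill_stock() in mm/memcontrol.c.

/*
 * Hypothetical userspace model of an asymmetric per-CPU
 * charge/uncharge cache. Not kernel code; the identifiers below are
 * invented for this sketch.
 */
#include <stdatomic.h>
#include <stdio.h>

#define CHARGE_BATCH    64      /* pages pulled from the shared counter at once */
#define UNCHARGE_BATCH  1024    /* pages returned only once this much is cached */

/* Shared (cross-CPU) page counter: the contended cache line. */
static _Atomic long shared_pages;

/* One instance of this would exist per CPU; one is enough here. */
struct pcpu_stock {
        long cached;            /* pages charged to the shared counter but unused */
};

/* Charge: consume from the local stock, refilling in large batches. */
static void charge(struct pcpu_stock *stock, long nr_pages)
{
        if (stock->cached < nr_pages) {
                /* Touch the shared counter once per refill, not per charge. */
                long refill = nr_pages + CHARGE_BATCH;

                atomic_fetch_add(&shared_pages, refill);
                stock->cached += refill;
        }
        stock->cached -= nr_pages;
}

/*
 * Uncharge: give pages back to the local stock; only drain to the
 * shared counter when the cache grows past UNCHARGE_BATCH, so a tight
 * alloc/free loop never leaves the local CPU.
 */
static void uncharge(struct pcpu_stock *stock, long nr_pages)
{
        stock->cached += nr_pages;
        if (stock->cached > UNCHARGE_BATCH) {
                atomic_fetch_sub(&shared_pages, stock->cached);
                stock->cached = 0;
        }
}

int main(void)
{
        struct pcpu_stock stock = { 0 };

        /* A tight charge/uncharge loop rarely reaches the shared counter. */
        for (int i = 0; i < 1000000; i++) {
                charge(&stock, 1);
                uncharge(&stock, 1);
        }
        printf("shared counter: %ld pages\n", atomic_load(&shared_pages));
        return 0;
}

In this sketch, a million charge/uncharge(1) iterations hit the shared
atomic exactly once (the initial refill); the trade-off is that up to
UNCHARGE_BATCH pages per CPU can sit charged but unused, which loosens
limit enforcement accordingly.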
