Re: [PATCH net-next v3 6/8] bpf: cpumap: switch to napi_skb_cache_get_bulk()

Alexander Lobakin <aleksander.lobakin@xxxxxxxxx> writes:

> Now that cpumap uses GRO, which drops unused skb heads to the NAPI
> cache, use napi_skb_cache_get_bulk() to try to reuse cached entries
> and lower MM layer pressure. Always disable the BH before checking and
> running the cpumap-pinned XDP prog and don't re-enable it in between
> that and allocating an skb bulk, as we can access the NAPI caches only
> from the BH context.
> The better GRO aggregates packets, the fewer new skbs need to be
> allocated. If an aggregated skb contains 16 frags, 15 skb heads were
> returned to the cache, so the next 15 skbs will be built without
> allocating anything.
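For context, the flow described above can be sketched roughly as below. This is a simplified illustration, not the actual patch; the helper names approximate the kernel's cpumap/skbuff APIs, and error handling and frame accounting are elided:

```c
/* Sketch of the cpumap kthread receive path with bulk skb reuse
 * (illustrative only). Everything here runs with BH disabled, which
 * is what makes the per-CPU NAPI skb cache safe to touch.
 */
local_bh_disable();

/* Run the cpumap-pinned XDP prog on the frame batch first... */
nframes = cpu_map_bpf_prog_run(rcpu, frames, xdp_n, &stats, &list);

/* ...then, still in BH context, grab skb heads from the NAPI cache
 * that GRO refilled; the helper falls back to bulk slab allocation
 * for whatever the cache cannot provide.
 */
m = napi_skb_cache_get_bulk(skbs, nframes);

for (i = 0; i < m; i++) {
	/* Build each skb around a pre-obtained head instead of
	 * allocating one per frame.
	 */
	skb = __xdp_build_skb_from_frame(frames[i], skbs[i], dev);
	/* ... hand skb to GRO, which may return unused heads
	 * to the cache for the next iteration ... */
}

local_bh_enable();
```

The key point is the single BH-disabled section spanning both the prog run and the bulk get: re-enabling BH in between would make the NAPI cache access invalid.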
>
> The same trafficgen UDP GRO test now shows:
>
>                 GRO off   GRO on
> threaded GRO    2.3       4         Mpps
> thr bulk GRO    2.4       4.7       Mpps
> diff            +4        +17       %
>
> Comparing to the baseline cpumap:
>
> baseline        2.7       N/A       Mpps
> thr bulk GRO    2.4       4.7       Mpps
> diff            -11       +74       %
>
> Signed-off-by: Alexander Lobakin <aleksander.lobakin@xxxxxxxxx>
> Tested-by: Daniel Xu <dxu@xxxxxxxxx>

Reviewed-by: Toke Høiland-Jørgensen <toke@xxxxxxxxxx>
