Re: [PATCH v2 bpf-next] cpumap: bulk skb using netif_receive_skb_list

On 4/15/21 8:05 AM, Daniel Borkmann wrote:
> On 4/13/21 6:22 PM, Lorenzo Bianconi wrote:
>> Rely on netif_receive_skb_list routine to send skbs converted from
>> xdp_frames in cpu_map_kthread_run in order to improve i-cache usage.
>> The proposed patch has been tested by running the xdp_redirect_cpu bpf
>> sample available in the kernel tree, which redirects UDP frames from
>> the ixgbe driver to a cpumap entry and then on to the networking stack.
>> UDP frames are generated using pktgen.
>>
>> $xdp_redirect_cpu  --cpu <cpu> --progname xdp_cpu_map0 --dev <eth>
>>
>> bpf-next: ~2.2Mpps
>> bpf-next + cpumap skb-list: ~3.15Mpps
>>
>> Signed-off-by: Lorenzo Bianconi <lorenzo@xxxxxxxxxx>
>> ---
>> Changes since v1:
>> - fixed comment
>> - rebased on top of bpf-next tree
>> ---
>>   kernel/bpf/cpumap.c | 11 +++++------
>>   1 file changed, 5 insertions(+), 6 deletions(-)
>>
>> diff --git a/kernel/bpf/cpumap.c b/kernel/bpf/cpumap.c
>> index 0cf2791d5099..d89551a508b2 100644
>> --- a/kernel/bpf/cpumap.c
>> +++ b/kernel/bpf/cpumap.c
>> @@ -27,7 +27,7 @@
>>   #include <linux/capability.h>
>>   #include <trace/events/xdp.h>
>>
>> -#include <linux/netdevice.h>   /* netif_receive_skb_core */
>> +#include <linux/netdevice.h>   /* netif_receive_skb_list */
>>   #include <linux/etherdevice.h> /* eth_type_trans */
>>
>>   /* General idea: XDP packets getting XDP redirected to another CPU,
>> @@ -257,6 +257,7 @@ static int cpu_map_kthread_run(void *data)
>>           void *frames[CPUMAP_BATCH];
>>           void *skbs[CPUMAP_BATCH];
>>           int i, n, m, nframes;
>> +        LIST_HEAD(list);
>>
>>           /* Release CPU reschedule checks */
>>           if (__ptr_ring_empty(rcpu->queue)) {
>> @@ -305,7 +306,6 @@ static int cpu_map_kthread_run(void *data)
>>           for (i = 0; i < nframes; i++) {
>>               struct xdp_frame *xdpf = frames[i];
>>               struct sk_buff *skb = skbs[i];
>> -            int ret;
>>
>>               skb = __xdp_build_skb_from_frame(xdpf, skb,
>>                                xdpf->dev_rx);
>> @@ -314,11 +314,10 @@ static int cpu_map_kthread_run(void *data)
>>                   continue;
>>               }
>>
>> -            /* Inject into network stack */
>> -            ret = netif_receive_skb_core(skb);
>> -            if (ret == NET_RX_DROP)
>> -                drops++;
>> +            list_add_tail(&skb->list, &list);
>>           }
>> +        netif_receive_skb_list(&list);
>> +
>>           /* Feedback loop via tracepoint */
>>           trace_xdp_cpumap_kthread(rcpu->map_id, n, drops, sched, &stats);
> 
> Given we stop counting drops with the netif_receive_skb_list(), we should
> then also remove drops from trace_xdp_cpumap_kthread(), imho, as otherwise
> it is rather misleading (as in: drops actually happening, but 0 are shown
> from the tracepoint). Given they are not considered stable API, I would
> just remove those to make it clear to users that they cannot rely on this
> counter anymore anyway.
> 

What's the visibility into drops then? It seems like it would be fairly
easy to have netif_receive_skb_list return the number of drops.
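
As a minimal sketch of that idea (hypothetical only: netif_receive_skb_list
currently returns void, and plumbing an accurate drop count through the
batched receive path is the non-trivial part), the caller side in
cpu_map_kthread_run could then look like:

		/* Hypothetical: netif_receive_skb_list() returns how many
		 * skbs the stack ended up dropping, so the kthread can keep
		 * feeding the drops counter that the tracepoint reports.
		 */
		drops += netif_receive_skb_list(&list);

		/* Feedback loop via tracepoint */
		trace_xdp_cpumap_kthread(rcpu->map_id, n, drops, sched, &stats);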


