RE: [PATCHv4 bpf-next 2/4] xdp: extend xdp_redirect_map with broadcast support

Hangbin Liu wrote:
> This patch adds two flags, BPF_F_BROADCAST and BPF_F_EXCLUDE_INGRESS, to
> extend xdp_redirect_map for broadcast support.
> 
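Might be worth showing the program-side usage in the commit message as
well. My reading of the uapi change is that a program opts in per
bpf_redirect_map() call, roughly like this (untested sketch; the map and
function names are made up, and I'm assuming the key argument is
ignored when BPF_F_BROADCAST is set):

  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>

  struct {
          __uint(type, BPF_MAP_TYPE_DEVMAP_HASH);
          __uint(max_entries, 32);
          __type(key, __u32);
          __type(value, __u32);
  } forward_map SEC(".maps");

  SEC("xdp")
  int xdp_broadcast(struct xdp_md *ctx)
  {
          /* Clone the frame to every device in forward_map,
           * skipping the ingress device.
           */
          return bpf_redirect_map(&forward_map, 0,
                                  BPF_F_BROADCAST | BPF_F_EXCLUDE_INGRESS);
  }

  char LICENSE[] SEC("license") = "GPL";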
> Keep the general data path in net/core/filter.c and the native data
> path in kernel/bpf/devmap.c so we can use direct calls to get better
> performance.
> 
> Here are the performance results using xdp_redirect_{map, map_multi} in
> samples/bpf, sending packets via the pktgen cmd:
> ./pktgen_sample03_burst_single_flow.sh -i eno1 -d $dst_ip -m $dst_mac -t 10 -s 64
> 
> There is some performance drop because we need to loop over the map and
> get each interface.
> 
> Version          | Test                                | Generic | Native
> 5.12 rc2         | redirect_map        i40e->i40e      |    2.0M |  9.8M
> 5.12 rc2         | redirect_map        i40e->veth      |    1.8M | 12.0M

Are these 10gbps i40e ports? Sorry if I asked this earlier, maybe
add a note in the commit if another respin is needed.

> 5.12 rc2 + patch | redirect_map        i40e->i40e      |    2.0M |  9.6M
> 5.12 rc2 + patch | redirect_map        i40e->veth      |    1.7M | 12.0M
> 5.12 rc2 + patch | redirect_map multi  i40e->i40e      |    1.6M |  7.8M
> 5.12 rc2 + patch | redirect_map multi  i40e->veth      |    1.4M |  9.3M
> 5.12 rc2 + patch | redirect_map multi  i40e->mlx4+veth |    1.0M |  3.4M
> 
> Signed-off-by: Hangbin Liu <liuhangbin@xxxxxxxxx>
> 
> ---
> v4:
> a) add a new argument flag_mask to __bpf_xdp_redirect_map() to filter
> out invalid maps.
> b) __bpf_xdp_redirect_map() sets the map pointer if the broadcast flag
> is set and clears it if the flag isn't set
> c) xdp_do_redirect() does the READ_ONCE/WRITE_ONCE on ri->map to check
> if we should enqueue multi
> 
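For (c), checking my understanding: the normal single-dev redirect keeps
its existing path, and we only take the multicast path when the
broadcast flag made __bpf_xdp_redirect_map() set ri->map, i.e. roughly
(my paraphrase of the changelog, not the patch verbatim, so helper
names/signatures may not match):

  /* in xdp_do_redirect() */
  struct bpf_map *map = READ_ONCE(ri->map);

  if (unlikely(map)) {
          /* broadcast: walk the devmap, cloning the frame per dst */
          WRITE_ONCE(ri->map, NULL);
          err = dev_map_enqueue_multi(xdp, dev, map,
                                      ri->flags & BPF_F_EXCLUDE_INGRESS);
  } else {
          /* unchanged single-target enqueue */
          err = dev_map_enqueue(fwd, xdp, dev);
  }

If that's the intent, a one-line comment next to the READ_ONCE() would
save the next reader some time.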
> v3:
> a) Rebase the code on Björn's "bpf, xdp: Restructure redirect actions".
>    - Add struct bpf_map *map back to struct bpf_redirect_info as we need
>      it for multicast.
>    - Add bpf_clear_redirect_map() back for devmap.c
>    - Add devmap_lookup_elem() as we need it in the general path.
> b) remove tmp_key in devmap_get_next_obj()
> 
> v2: Fix flag renaming issue in v1
> ---
>  include/linux/bpf.h            |  22 ++++++
>  include/linux/filter.h         |  18 ++++-
>  include/net/xdp.h              |   1 +
>  include/uapi/linux/bpf.h       |  17 ++++-
>  kernel/bpf/cpumap.c            |   3 +-
>  kernel/bpf/devmap.c            | 133 ++++++++++++++++++++++++++++++++-
>  net/core/filter.c              |  97 +++++++++++++++++++++++-
>  net/core/xdp.c                 |  29 +++++++
>  net/xdp/xskmap.c               |   3 +-
>  tools/include/uapi/linux/bpf.h |  17 ++++-
>  10 files changed, 326 insertions(+), 14 deletions(-)
> 

[...]

>  static int cpu_map_btf_id;
> diff --git a/kernel/bpf/devmap.c b/kernel/bpf/devmap.c
> index 3980fb3bfb09..c8452c5f40f8 100644
> --- a/kernel/bpf/devmap.c
> +++ b/kernel/bpf/devmap.c
> @@ -198,6 +198,7 @@ static void dev_map_free(struct bpf_map *map)
>  	list_del_rcu(&dtab->list);
>  	spin_unlock(&dev_map_lock);
>  
> +	bpf_clear_redirect_map(map);

Is this a bugfix? If it's needed here, wouldn't we also need it in the
devmap case?
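For context, what I recall bpf_clear_redirect_map() doing (from before
the restructure removed it, so details may have drifted) is walking the
per-CPU redirect state and dropping any dangling pointer to the map
being freed:

  void bpf_clear_redirect_map(struct bpf_map *map)
  {
          struct bpf_redirect_info *ri;
          int cpu;

          for_each_possible_cpu(cpu) {
                  ri = per_cpu_ptr(&bpf_redirect_info, cpu);
                  /* Only touch the remote cacheline when this CPU's
                   * ri->map actually points at the map being freed;
                   * the cmpxchg() closes the race with a concurrent
                   * update on that CPU.
                   */
                  if (unlikely(READ_ONCE(ri->map) == map))
                          cmpxchg(&ri->map, map, NULL);
          }
  }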

>  	synchronize_rcu();
>  
>  	/* Make sure prior __dev_map_entry_free() have completed. */

[...]

> +
> +static struct bpf_dtab_netdev *devmap_get_next_obj(struct xdp_buff *xdp,
> +						   struct bpf_map *map,
> +						   u32 *key, u32 *next_key,
> +						   int ex_ifindex)
> +{
> +	struct bpf_dtab_netdev *obj;
> +	struct net_device *dev;
> +	u32 index;
> +	int err;
> +
> +	err = devmap_get_next_key(map, key, next_key);
> +	if (err)
> +		return NULL;
> +
> +	/* When using dev map hash, we could restart the hashtab traversal
> +	 * in case the key has been updated/removed in the mean time.
> +	 * So we may end up potentially looping due to traversal restarts
> +	 * from first elem.
> +	 *
> +	 * Let's use map's max_entries to limit the loop number.
> +	 */
> +	for (index = 0; index < map->max_entries; index++) {
> +		obj = devmap_lookup_elem(map, *next_key);
> +		if (!obj || dst_dev_is_ingress(obj, ex_ifindex))
> +			goto find_next;
> +
> +		dev = obj->dev;
> +
> +		if (!dev->netdev_ops->ndo_xdp_xmit)
> +			goto find_next;
> +
> +		err = xdp_ok_fwd_dev(dev, xdp->data_end - xdp->data);
> +		if (unlikely(err))
> +			goto find_next;
> +
> +		return obj;
> +
> +find_next:
> +		key = next_key;
> +		err = devmap_get_next_key(map, key, next_key);
> +		if (err)
> +			break;
> +	}

I'm missing something; either an elaborated commit message or a comment
is probably needed. I've been looking at this block for 30 minutes and
can't see how we avoid sending duplicate frames on a single interface.
Can you check this code flow:

  dev_map_enqueue_multi()
   for (;;) {
     next_obj = devmap_get_next_obj(...)
        for (index = 0; index < map->max_entries; index++) {
           obj = devmap_lookup_elem();
           if (!obj) goto find_next
           key = next_key;
           err = devmap_get_next_key() 
                  if (!key) goto find_first
                  for (i = 0; i < dtab->n_buckets; i++)
                      return *next <- now *next_key points back
                                      at the first entry
           // loop back through and find first obj and return that
        }
      bq_enqueue(...) // enqueue original obj
      obj = next_obj;
      key = next_key; 
      ...  // we are going to enqueue first obj, but how do we know
           // this hasn't already been sent? Presumably if we have
           // a delete in the hash table in the middle of a multicast
           // operation this might happen?
   }
     

> +
> +	return NULL;
> +}
> +
> +int dev_map_enqueue_multi(struct xdp_buff *xdp, struct net_device *dev_rx,
> +			  struct bpf_map *map, bool exclude_ingress)
> +{
> +	struct bpf_dtab_netdev *obj = NULL, *next_obj = NULL;
> +	struct xdp_frame *xdpf, *nxdpf;
> +	u32 key, next_key;
> +	int ex_ifindex;
> +
> +	ex_ifindex = exclude_ingress ? dev_rx->ifindex : 0;
> +
> +	/* Find first available obj */
> +	obj = devmap_get_next_obj(xdp, map, NULL, &key, ex_ifindex);
> +	if (!obj)
> +		return -ENOENT;
> +
> +	xdpf = xdp_convert_buff_to_frame(xdp);
> +	if (unlikely(!xdpf))
> +		return -EOVERFLOW;
> +
> +	for (;;) {

A nit, take it or not. These for (;;) loops always seem a bit odd to me
when we really don't want it to run forever. I prefer

        while (!next_obj)

but a matter of style I guess.
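
To make it concrete, the same logic reshaped without the for (;;)
(untested; note the condition here is the inverse of my one-liner
above, since we want to keep looping while there *is* a next object):

  next_obj = devmap_get_next_obj(xdp, map, &key, &next_key, ex_ifindex);
  while (next_obj) {
          nxdpf = xdpf_clone(xdpf);
          if (unlikely(!nxdpf)) {
                  xdp_return_frame_rx_napi(xdpf);
                  return -ENOMEM;
          }

          /* not the last destination, send a clone */
          bq_enqueue(obj->dev, nxdpf, dev_rx, obj->xdp_prog);

          obj = next_obj;
          key = next_key;
          next_obj = devmap_get_next_obj(xdp, map, &key, &next_key,
                                         ex_ifindex);
  }

  /* last destination gets the original frame, no clone needed */
  bq_enqueue(obj->dev, xdpf, dev_rx, obj->xdp_prog);
  return 0;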

> +		/* Check if we still have one more available obj */
> +		next_obj = devmap_get_next_obj(xdp, map, &key, &next_key, ex_ifindex);
> +		if (!next_obj) {
> +			bq_enqueue(obj->dev, xdpf, dev_rx, obj->xdp_prog);
> +			return 0;
> +		}
> +
> +		nxdpf = xdpf_clone(xdpf);
> +		if (unlikely(!nxdpf)) {
> +			xdp_return_frame_rx_napi(xdpf);
> +			return -ENOMEM;
> +		}
> +
> +		bq_enqueue(obj->dev, nxdpf, dev_rx, obj->xdp_prog);
> +
> +		/* Deal with next obj */
> +		obj = next_obj;
> +		key = next_key;
> +	}
> +}
> +

Thanks,
John


