Re: [PATCHv7 bpf-next 1/4] bpf: run devmap xdp_prog on flush instead of bulk enqueue

On Thu, Apr 15, 2021 at 11:22:19AM +0200, Toke Høiland-Jørgensen wrote:
> Hangbin Liu <liuhangbin@xxxxxxxxx> writes:
> 
> > On Wed, Apr 14, 2021 at 05:17:11PM -0700, Martin KaFai Lau wrote:
> >> >  static void bq_xmit_all(struct xdp_dev_bulk_queue *bq, u32 flags)
> >> >  {
> >> >  	struct net_device *dev = bq->dev;
> >> > -	int sent = 0, err = 0;
> >> > +	int sent = 0, drops = 0, err = 0;
> >> > +	unsigned int cnt = bq->count;
> >> > +	int to_send = cnt;
> >> >  	int i;
> >> >  
> >> > -	if (unlikely(!bq->count))
> >> > +	if (unlikely(!cnt))
> >> >  		return;
> >> >  
> >> > -	for (i = 0; i < bq->count; i++) {
> >> > +	for (i = 0; i < cnt; i++) {
> >> >  		struct xdp_frame *xdpf = bq->q[i];
> >> >  
> >> >  		prefetch(xdpf);
> >> >  	}
> >> >  
> >> > -	sent = dev->netdev_ops->ndo_xdp_xmit(dev, bq->count, bq->q, flags);
> >> > +	if (bq->xdp_prog) {
> >> bq->xdp_prog is used here
> >> 
> >> > +		to_send = dev_map_bpf_prog_run(bq->xdp_prog, bq->q, cnt, dev);
> >> > +		if (!to_send)
> >> > +			goto out;
> >> > +
> >> > +		drops = cnt - to_send;
> >> > +	}
> >> > +
> >> 
> >> [ ... ]
> >> 
> >> >  static void bq_enqueue(struct net_device *dev, struct xdp_frame *xdpf,
> >> > -		       struct net_device *dev_rx)
> >> > +		       struct net_device *dev_rx, struct bpf_prog *xdp_prog)
> >> >  {
> >> >  	struct list_head *flush_list = this_cpu_ptr(&dev_flush_list);
> >> >  	struct xdp_dev_bulk_queue *bq = this_cpu_ptr(dev->xdp_bulkq);
> >> > @@ -412,18 +466,22 @@ static void bq_enqueue(struct net_device *dev, struct xdp_frame *xdpf,
> >> >  	/* Ingress dev_rx will be the same for all xdp_frame's in
> >> >  	 * bulk_queue, because bq stored per-CPU and must be flushed
> >> >  	 * from net_device drivers NAPI func end.
> >> > +	 *
> >> > +	 * Do the same with xdp_prog and flush_list since these fields
> >> > +	 * are only ever modified together.
> >> >  	 */
> >> > -	if (!bq->dev_rx)
> >> > +	if (!bq->dev_rx) {
> >> >  		bq->dev_rx = dev_rx;
> >> > +		bq->xdp_prog = xdp_prog;
> >> bq->xdp_prog is assigned here and could be used later in bq_xmit_all().
> >> How is bq->xdp_prog protected? Are they all under one rcu_read_lock()?
> >> It is not very obvious after taking a quick look at xdp_do_flush[_map].
> >> 
> >> e.g. what if the devmap elem gets deleted.
> >
> > Jesper knows better than me. From my view, based on the description of
> > __dev_flush():
> >
> > On devmap tear down we ensure the flush list is empty before completing to
> > ensure all flush operations have completed. When drivers update the bpf
> > program they may need to ensure any flush ops are also complete.
AFAICT, the bq->xdp_prog is not from the dev. It is from a devmap's elem.
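Roughly, as I read the patch (hand-written flow, simplified, not the
exact code):

    dev_map_enqueue(dst, xdp, dev_rx)            /* dst is the devmap elem */
        -> bq_enqueue(dst->dev, xdpf, dev_rx, dst->xdp_prog)
            -> bq->xdp_prog = dst->xdp_prog;     /* stashed until flush */

so if the elem gets deleted before the flush runs, bq->xdp_prog may point
to a prog that is about to be (or has already been) released.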

> 
> Yeah, drivers call xdp_do_flush() before exiting their NAPI poll loop,
> which also runs under one big rcu_read_lock(). So the storage in the
> bulk queue is quite temporary, it's just used for bulking to increase
> performance :)
I am missing the one big rcu_read_lock() part.  For example, in i40e_txrx.c,
i40e_run_xdp() has its own rcu_read_lock/unlock().  dst->xdp_prog used to be
run from within i40e_run_xdp(), and that was fine.

In this patch, dst->xdp_prog is run outside of i40e_run_xdp(), after its
rcu_read_unlock() has already been done.  It is now run in xdp_do_flush_map().
Or did I miss the big rcu_read_lock() in i40e_napi_poll()?
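
To make the gap concrete (hand-written flow, not the actual i40e source):

    i40e_napi_poll()
        i40e_clean_rx_irq()
            i40e_run_xdp()
                rcu_read_lock();
                ... XDP_REDIRECT -> xdp_do_redirect() -> bq_enqueue() ...
                rcu_read_unlock();              /* narrow lock ends here */
        ...
        xdp_do_flush_map()
            bq_xmit_all()
                dev_map_bpf_prog_run(bq->xdp_prog, ...)  /* under which lock? */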

I do see the big rcu_read_lock() in mlx5e_napi_poll().
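i.e. roughly:

    mlx5e_napi_poll()
        rcu_read_lock();                /* covers the whole poll cycle */
        ... rx/tx processing, bq_enqueue(), xdp_do_flush() ...
        rcu_read_unlock();

With that pattern, bq->xdp_prog is read under the same read-side section
that stored it.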


