Re: [PATCH net v2] veth: xdp: use head instead of hard_start

On Tue, 31 Mar 2020 15:16:22 +0900
Toshiaki Makita <toshiaki.makita1@xxxxxxxxx> wrote:

> On 2020/03/31 15:06, Mao Wenan wrote:
> > xdp.data_hard_start is supposed to be the first address of the
> > struct xdp_frame, as set up in convert_to_xdp_frame(). But the
> > pointer hard_start in veth_xdp_rcv_one() is offset 32 bytes into
> > the frame, so head should be used instead of hard_start to set
> > xdp.data_hard_start. Otherwise, if the BPF program calls a helper
> > such as bpf_xdp_adjust_head(), it gets confused about where
> > xdp_frame_end is.  
> 
> I think you should discuss this more with Jesper before
> submitting v2.
> He does not want this included right now due to the risk of merge
> conflicts.
> Basically I agree with him that we don't need to hurry with this fix.
> 
> Toshiaki Makita
> 
> > 
> > Fixes: 9fc8d518d9d5 ("veth: Handle xdp_frames in xdp napi ring")
> > Signed-off-by: Mao Wenan <maowenan@xxxxxxxxxx>
> > ---
> >   v2: add Fixes tag and expand the commit log.
> >   drivers/net/veth.c | 2 +-
> >   1 file changed, 1 insertion(+), 1 deletion(-)
> > 
> > diff --git a/drivers/net/veth.c b/drivers/net/veth.c
> > index d4cbb9e8c63f..5ea550884bf8 100644
> > --- a/drivers/net/veth.c
> > +++ b/drivers/net/veth.c
> > @@ -506,7 +506,7 @@ static struct sk_buff *veth_xdp_rcv_one(struct veth_rq *rq,
> >   		struct xdp_buff xdp;
> >   		u32 act;
> >   
> > -		xdp.data_hard_start = hard_start;
> > +		xdp.data_hard_start = head;
> >   		xdp.data = frame->data;
> >   		xdp.data_end = frame->data + frame->len;
> >   		xdp.data_meta = frame->data - frame->metasize;
> >   

Below is the patch that I have in my queue.  I've added a Reported-by
tag to give you some credit, even though I already had plans to fix
this as part of my XDP frame_sz work.


[PATCH RFC net-next] veth: adjust hard_start offset on redirect XDP frames

When a native XDP redirect targets a veth device, the frame arrives as
an xdp_frame structure. It is then processed in veth_xdp_rcv_one(),
which can run another XDP bpf_prog on the packet. Doing so requires
converting the xdp_frame to an xdp_buff, but the tricky part is that
the xdp_frame metadata is stored at the top (data_hard_start) of the
memory area that the xdp_buff will point into.
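
For reference, here is a simplified sketch of how convert_to_xdp_frame()
places the frame metadata at data_hard_start (the function name below is
made up for illustration; details differ from the real net/core/xdp.c):

/* Simplified sketch, not the exact kernel source */
static struct xdp_frame *convert_to_xdp_frame_sketch(struct xdp_buff *xdp)
{
	struct xdp_frame *xdp_frame;
	int headroom, metasize;

	headroom = xdp->data - xdp->data_hard_start;
	metasize = xdp->data - xdp->data_meta;
	metasize = metasize > 0 ? metasize : 0;

	/* Make sure the headroom can hold the metadata */
	if (unlikely((headroom - metasize) < sizeof(*xdp_frame)))
		return NULL;

	/* Store the metadata at the very top of the buffer, i.e. the
	 * xdp_frame starts exactly at data_hard_start.
	 */
	xdp_frame = xdp->data_hard_start;

	xdp_frame->data = xdp->data;
	xdp_frame->len = xdp->data_end - xdp->data;
	/* the recorded headroom excludes the xdp_frame area itself */
	xdp_frame->headroom = headroom - sizeof(*xdp_frame);
	xdp_frame->metasize = metasize;

	return xdp_frame;
}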

The current code tries to protect the xdp_frame area by pointing
xdp_buff.data_hard_start past this memory. This leaves 32 bytes less
headroom to expand into via the BPF helper bpf_xdp_adjust_head().
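
Roughly, the buffer layout seen by veth_xdp_rcv_one() before this patch
looks like this (a sketch; sizes are illustrative):

/*
 *   head ------------> +---------------------+
 *                      | struct xdp_frame    |  (32 bytes here)
 *   hard_start ------> +---------------------+  <- old xdp.data_hard_start
 *                      | headroom            |
 *   frame->data -----> +---------------------+  <- xdp.data
 *                      | packet payload      |
 *                      +---------------------+  <- xdp.data_end
 */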

This protection step is actually not needed, because the BPF helper
bpf_xdp_adjust_head() already reserves this area and does not allow
the BPF program to expand into it. Thus, it is safe to point
data_hard_start directly at the xdp_frame memory area.
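
For context, this is roughly the check the helper performs (a simplified
sketch modeled on net/core/filter.c of this era, with a made-up function
name; not the exact kernel source):

/* Simplified sketch of bpf_xdp_adjust_head()'s bounds check */
static int xdp_adjust_head_sketch(struct xdp_buff *xdp, int offset)
{
	/* The first sizeof(struct xdp_frame) bytes after data_hard_start
	 * are reserved, so the BPF program can never move xdp->data into
	 * the xdp_frame metadata area.
	 */
	void *xdp_frame_end = xdp->data_hard_start + sizeof(struct xdp_frame);
	void *data = xdp->data + offset;

	/* Reject moves into the reserved area or past the packet end */
	if (unlikely(data < xdp_frame_end ||
		     data > xdp->data_end - ETH_HLEN))
		return -EINVAL;

	xdp->data = data;
	return 0;
}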

Cc: Toshiaki Makita <makita.toshiaki@xxxxxxxxxxxxx>
Fixes: 9fc8d518d9d5 ("veth: Handle xdp_frames in xdp napi ring")
Reported-by: Mao Wenan <maowenan@xxxxxxxxxx>
Signed-off-by: Jesper Dangaard Brouer <brouer@xxxxxxxxxx>
---
 drivers/net/veth.c |    8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/net/veth.c b/drivers/net/veth.c
index 8cdc4415fa70..2edc04a8ab8e 100644
--- a/drivers/net/veth.c
+++ b/drivers/net/veth.c
@@ -493,13 +493,15 @@ static struct sk_buff *veth_xdp_rcv_one(struct veth_rq *rq,
 					struct veth_xdp_tx_bq *bq)
 {
 	void *hard_start = frame->data - frame->headroom;
-	void *head = hard_start - sizeof(struct xdp_frame);
 	int len = frame->len, delta = 0;
 	struct xdp_frame orig_frame;
 	struct bpf_prog *xdp_prog;
 	unsigned int headroom;
 	struct sk_buff *skb;
 
+	/* bpf_xdp_adjust_head() assures BPF cannot access xdp_frame area */
+	hard_start -= sizeof(struct xdp_frame);
+
 	rcu_read_lock();
 	xdp_prog = rcu_dereference(rq->xdp_prog);
 	if (likely(xdp_prog)) {
@@ -521,7 +523,6 @@ static struct sk_buff *veth_xdp_rcv_one(struct veth_rq *rq,
 			break;
 		case XDP_TX:
 			orig_frame = *frame;
-			xdp.data_hard_start = head;
 			xdp.rxq->mem = frame->mem;
 			if (unlikely(veth_xdp_tx(rq->dev, &xdp, bq) < 0)) {
 				trace_xdp_exception(rq->dev, xdp_prog, act);
@@ -533,7 +534,6 @@ static struct sk_buff *veth_xdp_rcv_one(struct veth_rq *rq,
 			goto xdp_xmit;
 		case XDP_REDIRECT:
 			orig_frame = *frame;
-			xdp.data_hard_start = head;
 			xdp.rxq->mem = frame->mem;
 			if (xdp_do_redirect(rq->dev, &xdp, xdp_prog)) {
 				frame = &orig_frame;
@@ -555,7 +555,7 @@ static struct sk_buff *veth_xdp_rcv_one(struct veth_rq *rq,
 	rcu_read_unlock();
 
 	headroom = sizeof(struct xdp_frame) + frame->headroom - delta;
-	skb = veth_build_skb(head, headroom, len, 0);
+	skb = veth_build_skb(hard_start, headroom, len, 0);
 	if (!skb) {
 		xdp_return_frame(frame);
 		goto err;


-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer



