This is a note to let you know that I've just added the patch titled

    xen-netfront: pull on receive skb may need to happen earlier

to the 3.10-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     xen-netfront-pull-on-receive-skb-may-need-to-happen-earlier.patch
and it can be found in the queue-3.10 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@xxxxxxxxxxxxxxx> know about it.


>From 093b9c71b6e450e375f4646ba86faed0195ec7df Mon Sep 17 00:00:00 2001
From: Jan Beulich <JBeulich@xxxxxxxx>
Date: Wed, 17 Jul 2013 08:09:37 +0100
Subject: xen-netfront: pull on receive skb may need to happen earlier

From: Jan Beulich <JBeulich@xxxxxxxx>

commit 093b9c71b6e450e375f4646ba86faed0195ec7df upstream.

Due to commit 3683243b ("xen-netfront: use __pskb_pull_tail to ensure
linear area is big enough on RX") xennet_fill_frags() may end up
filling MAX_SKB_FRAGS + 1 fragments in a receive skb, and only reduce
the fragment count subsequently via __pskb_pull_tail(). That's a
result of xennet_get_responses() allowing a maximum of one more slot
to be consumed (and intermediately transformed into a fragment) if the
head slot has a size less than or equal to RX_COPY_THRESHOLD.

Hence we need to adjust xennet_fill_frags() to pull earlier if we
reached the maximum fragment count - due to the described behavior of
xennet_get_responses() this guarantees that at least the first
fragment will get completely consumed, and hence the fragment count
reduced.

In order to not needlessly call __pskb_pull_tail() twice, make the
original call conditional upon the pull target not having been
reached yet, and defer the newly added one as much as possible (an
alternative would have been to always call the function right before
the call to xennet_fill_frags(), but that would imply more frequent
cases of needing to call it twice).

Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
Acked-by: Wei Liu <wei.liu2@xxxxxxxxxx>
Cc: Ian Campbell <ian.campbell@xxxxxxxxxx>
Acked-by: Ian Campbell <ian.campbell@xxxxxxxxxx>
Signed-off-by: David S. Miller <davem@xxxxxxxxxxxxx>
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>

---
 drivers/net/xen-netfront.c |   31 +++++++++++++------------------
 1 file changed, 13 insertions(+), 18 deletions(-)

--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -276,8 +276,7 @@ no_skb:
 			break;
 		}
 
-		__skb_fill_page_desc(skb, 0, page, 0, 0);
-		skb_shinfo(skb)->nr_frags = 1;
+		skb_add_rx_frag(skb, 0, page, 0, 0, PAGE_SIZE);
 		__skb_queue_tail(&np->rx_batch, skb);
 	}
 
@@ -822,7 +821,6 @@ static RING_IDX xennet_fill_frags(struct
 				  struct sk_buff_head *list)
 {
 	struct skb_shared_info *shinfo = skb_shinfo(skb);
-	int nr_frags = shinfo->nr_frags;
 	RING_IDX cons = np->rx.rsp_cons;
 	struct sk_buff *nskb;
 
@@ -831,19 +829,21 @@ static RING_IDX xennet_fill_frags(struct
 			RING_GET_RESPONSE(&np->rx, ++cons);
 		skb_frag_t *nfrag = &skb_shinfo(nskb)->frags[0];
 
-		__skb_fill_page_desc(skb, nr_frags,
-				     skb_frag_page(nfrag),
-				     rx->offset, rx->status);
+		if (shinfo->nr_frags == MAX_SKB_FRAGS) {
+			unsigned int pull_to = NETFRONT_SKB_CB(skb)->pull_to;
 
-		skb->data_len += rx->status;
+			BUG_ON(pull_to <= skb_headlen(skb));
+			__pskb_pull_tail(skb, pull_to - skb_headlen(skb));
+		}
+		BUG_ON(shinfo->nr_frags >= MAX_SKB_FRAGS);
+
+		skb_add_rx_frag(skb, shinfo->nr_frags, skb_frag_page(nfrag),
+				rx->offset, rx->status, PAGE_SIZE);
 
 		skb_shinfo(nskb)->nr_frags = 0;
 		kfree_skb(nskb);
-
-		nr_frags++;
 	}
 
-	shinfo->nr_frags = nr_frags;
 	return cons;
 }
 
@@ -929,7 +929,8 @@ static int handle_incoming_queue(struct
 	while ((skb = __skb_dequeue(rxq)) != NULL) {
 		int pull_to = NETFRONT_SKB_CB(skb)->pull_to;
 
-		__pskb_pull_tail(skb, pull_to - skb_headlen(skb));
+		if (pull_to > skb_headlen(skb))
+			__pskb_pull_tail(skb, pull_to - skb_headlen(skb));
 
 		/* Ethernet work: Delayed to here as it peeks the header. */
 		skb->protocol = eth_type_trans(skb, dev);
@@ -1015,16 +1016,10 @@ err:
 		skb_shinfo(skb)->frags[0].page_offset = rx->offset;
 		skb_frag_size_set(&skb_shinfo(skb)->frags[0], rx->status);
 		skb->data_len = rx->status;
+		skb->len += rx->status;
 
 		i = xennet_fill_frags(np, skb, &tmpq);
 
-		/*
-		 * Truesize is the actual allocation size, even if the
-		 * allocation is only partially used.
-		 */
-		skb->truesize += PAGE_SIZE * skb_shinfo(skb)->nr_frags;
-		skb->len += skb->data_len;
-
 		if (rx->flags & XEN_NETRXF_csum_blank)
 			skb->ip_summed = CHECKSUM_PARTIAL;
 		else if (rx->flags & XEN_NETRXF_data_validated)


Patches currently in stable-queue which might be from JBeulich@xxxxxxxx are

queue-3.10/xen-netfront-pull-on-receive-skb-may-need-to-happen-earlier.patch
queue-3.10/xen-blkback-check-device-permissions-before-allowing-op_discard.patch
--
To unsubscribe from this list: send the line "unsubscribe stable" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
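
[Editor's illustration, not part of the patch mail above.]

As a quick illustration of the reordering the commit message describes
(pull into the linear area *before* the fragment array can overflow,
rather than fixing it up afterwards), here is a small self-contained
userspace model. It is only a sketch: the toy_* names, TOY_MAX_FRAGS
and the byte counts are invented for the example and are not the
kernel's data structures or functions.

/* Toy model of the "pull before adding the frag" ordering; not kernel code. */
#include <assert.h>
#include <stdio.h>

#define TOY_MAX_FRAGS 4			/* stands in for MAX_SKB_FRAGS */

struct toy_skb {
	unsigned int headlen;		/* bytes in the linear area */
	unsigned int pull_to;		/* desired linear size ("pull_to" in the patch) */
	unsigned int nr_frags;
	unsigned int frag_len[TOY_MAX_FRAGS];
};

/* Rough stand-in for __pskb_pull_tail(): consume the first fragment
 * into the linear area, freeing one slot in the fragment array. */
static void toy_pull_tail(struct toy_skb *skb)
{
	unsigned int i;

	assert(skb->nr_frags > 0);
	skb->headlen += skb->frag_len[0];
	for (i = 1; i < skb->nr_frags; i++)
		skb->frag_len[i - 1] = skb->frag_len[i];
	skb->nr_frags--;
}

/* Loosely mirrors the patched xennet_fill_frags() loop body: if the
 * array is already full, pull first so the append below cannot overflow. */
static void toy_add_frag(struct toy_skb *skb, unsigned int len)
{
	if (skb->nr_frags == TOY_MAX_FRAGS) {
		assert(skb->pull_to > skb->headlen);	/* first BUG_ON() in the patch */
		toy_pull_tail(skb);
	}
	assert(skb->nr_frags < TOY_MAX_FRAGS);		/* second BUG_ON() in the patch */

	skb->frag_len[skb->nr_frags++] = len;
}

int main(void)
{
	struct toy_skb skb = { .headlen = 64, .pull_to = 256 };
	unsigned int i;

	/* One more slot than the fragment array holds, as in the scenario
	 * the commit message describes. */
	for (i = 0; i < TOY_MAX_FRAGS + 1; i++)
		toy_add_frag(&skb, 128);

	printf("headlen=%u nr_frags=%u\n", skb.headlen, skb.nr_frags);
	return 0;
}

With the pre-patch ordering, the fifth toy_add_frag() call would have
written past the array before any pull happened; here the pull runs
first, so the append always has a free slot.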