Re: [RFC] virtio_net: add local_bh_disable() around u64_stats_update_begin

On 2018/10/18 4:47 PM, Sebastian Andrzej Siewior wrote:
On 2018-10-17 14:48:02 [+0800], Jason Wang wrote:
On 2018/10/17 9:13 AM, Toshiaki Makita wrote:
I'm not sure what condition triggered this warning.
If the seqlock is acquired once in softirq and then again in process
context, that is enough evidence for lockdep to trigger this warning.

Toshiaki Makita
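For reference, a minimal sketch of the kind of change the subject line
describes, assuming the 4.19-era per-queue stats layout (rq->stats.syncp,
rq->stats.kicks); the helper name is hypothetical and this is not the exact
RFC patch:

/* Sketch: try_fill_recv() bumps the per-queue kick counter and can run
 * both from NAPI (softirq) and from process context (virtnet_open(),
 * refill_work()).  Disabling bottom halves on the process-context side
 * keeps NAPI on this CPU from entering the same stats seqcount while it
 * is held, which is the situation lockdep warns about.
 */
static void virtnet_stats_kick(struct receive_queue *rq)
{
	local_bh_disable();
	u64_stats_update_begin(&rq->stats.syncp);
	rq->stats.kicks++;
	u64_stats_update_end(&rq->stats.syncp);
	local_bh_enable();
}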

Or maybe NAPI is enabled unexpectedly somewhere?

Btw, the schedule_delayed_work() in virtnet_open() is also suspicious: if
the work is executed before virtnet_napi_enable(), napi_disable() will
spin in an endless loop.
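For context on why that would spin: netif_napi_add() leaves NAPI_STATE_SCHED
set until napi_enable() clears it, and napi_disable() waits for that bit.
Roughly (paraphrased, not a verbatim copy of net/core/dev.c from that time):

/* Approximate shape of napi_disable().  If the instance was never
 * napi_enable()d, NAPI_STATE_SCHED is still set from netif_napi_add(),
 * so the loop below keeps sleeping and retrying until napi_enable()
 * finally runs -- the hang refill_work() could hit if it is scheduled
 * before virtnet_napi_enable().
 */
void napi_disable(struct napi_struct *n)
{
	might_sleep();
	set_bit(NAPI_STATE_DISABLE, &n->state);

	while (test_and_set_bit(NAPI_STATE_SCHED, &n->state))
		msleep(1);

	clear_bit(NAPI_STATE_DISABLE, &n->state);
}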
Something like this? It is also likely that if it runs OOM on queue 2, it
will run OOM again on queue 3.

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index fbcfb4d272336..87d6ec4765270 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -1263,22 +1263,22 @@ static void refill_work(struct work_struct *work)
  {
  	struct virtnet_info *vi =
  		container_of(work, struct virtnet_info, refill.work);
-	bool still_empty;
+	int still_empty = 0;
  	int i;

 	for (i = 0; i < vi->curr_queue_pairs; i++) {
  		struct receive_queue *rq = &vi->rq[i];

 		napi_disable(&rq->napi);
-		still_empty = !try_fill_recv(vi, rq, GFP_KERNEL);
+		if (!try_fill_recv(vi, rq, GFP_KERNEL))
+		    still_empty++;
  		virtnet_napi_enable(rq->vq, &rq->napi);
-
-		/* In theory, this can happen: if we don't get any buffers in
-		 * we will *never* try to fill again.
-		 */
-		if (still_empty)
-			schedule_delayed_work(&vi->refill, HZ/2);
  	}
+	/* In theory, this can happen: if we don't get any buffers in
+	 * we will *never* try to fill again.
+	 */
+	if (still_empty)
+		schedule_delayed_work(&vi->refill, HZ/2);
  }


I think this part is not a must, or it is an independent optimization?


Thanks


 static int virtnet_receive(struct receive_queue *rq, int budget,
@@ -1407,12 +1407,13 @@ static int virtnet_open(struct net_device *dev)
  {
  	struct virtnet_info *vi = netdev_priv(dev);
  	int i, err;
+	int need_refill = 0;

 	for (i = 0; i < vi->max_queue_pairs; i++) {
  		if (i < vi->curr_queue_pairs)
  			/* Make sure we have some buffers: if oom use wq. */
  			if (!try_fill_recv(vi, &vi->rq[i], GFP_KERNEL))
-				schedule_delayed_work(&vi->refill, 0);
+				need_refill++;

 		err = xdp_rxq_info_reg(&vi->rq[i].xdp_rxq, dev, i);
  		if (err < 0)
@@ -1428,6 +1429,8 @@ static int virtnet_open(struct net_device *dev)
  		virtnet_napi_enable(vi->rq[i].vq, &vi->rq[i].napi);
  		virtnet_napi_tx_enable(vi, vi->sq[i].vq, &vi->sq[i].napi);
  	}
+	if (need_refill)
+		schedule_delayed_work(&vi->refill, 0);

 	return 0;
  }
@@ -2236,6 +2239,7 @@ static int virtnet_restore_up(struct virtio_device *vdev)
  {
  	struct virtnet_info *vi = vdev->priv;
  	int err, i;
+	int need_refill = 0;

 	err = init_vqs(vi);
  	if (err)
@@ -2246,13 +2250,15 @@ static int virtnet_restore_up(struct virtio_device *vdev)
  	if (netif_running(vi->dev)) {
  		for (i = 0; i < vi->curr_queue_pairs; i++)
  			if (!try_fill_recv(vi, &vi->rq[i], GFP_KERNEL))
-				schedule_delayed_work(&vi->refill, 0);
+				need_refill++;

 		for (i = 0; i < vi->max_queue_pairs; i++) {
  			virtnet_napi_enable(vi->rq[i].vq, &vi->rq[i].napi);
  			virtnet_napi_tx_enable(vi, vi->sq[i].vq,
  					       &vi->sq[i].napi);
  		}
+		if (need_refill)
+			schedule_delayed_work(&vi->refill, 0);
  	}

 	netif_device_attach(vi->dev);

Thanks
Sebastian