On (Thu) 29 Dec 2011 [18:15:23], Michael S. Tsirkin wrote:
> On Thu, Dec 22, 2011 at 04:58:33PM +0530, Amit Shah wrote:
> > Remove all the vqs, disable napi and detach from the netdev on
> > hibernation.
> >
> > Re-create vqs after restoring from a hibernated image, re-enable napi
> > and re-attach the netdev.  This keeps networking working across
> > hibernation.
> >
> > Signed-off-by: Amit Shah <amit.shah@xxxxxxxxxx>
> > ---
> >  drivers/net/virtio_net.c |   46 ++++++++++++++++++++++++++++++++++++++++++++++
> >  1 files changed, 46 insertions(+), 0 deletions(-)
> >
> > diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> > index 7a2a5bf..b31670f 100644
> > --- a/drivers/net/virtio_net.c
> > +++ b/drivers/net/virtio_net.c
> > @@ -1159,6 +1159,48 @@ static void __devexit virtnet_remove(struct virtio_device *vdev)
> >  	free_netdev(vi->dev);
> >  }
> >
> > +#ifdef CONFIG_PM
> > +static int virtnet_freeze(struct virtio_device *vdev)
> > +{
> > +	struct virtnet_info *vi = vdev->priv;
> > +
> > +	virtqueue_disable_cb(vi->rvq);
> > +	virtqueue_disable_cb(vi->svq);
> > +	if (virtio_has_feature(vi->vdev, VIRTIO_NET_F_CTRL_VQ))
> > +		virtqueue_disable_cb(vi->cvq);
> > +
> > +	netif_device_detach(vi->dev);
> > +	cancel_delayed_work_sync(&vi->refill);
> > +
> > +	if (netif_running(vi->dev))
> > +		napi_disable(&vi->napi);
> > +
> > +	remove_vq_common(vi);
> > +
> > +	return 0;
> > +}
> > +
> > +static int virtnet_restore(struct virtio_device *vdev)
> > +{
> > +	struct virtnet_info *vi = vdev->priv;
> > +	int err;
> > +
> > +	err = init_vqs(vi);
> > +	if (err)
> > +		return err;
> > +
> > +	if (netif_running(vi->dev))
>
> Can virtnet_close run at this point?
> If yes, it's a bug.

Userspace is suspended, and kthreads are frozen.  If virtnet_close was
called before suspend, it would have already succeeded in disabling
napi.
> > +		virtnet_napi_enable(vi);
> > +
> > +	netif_device_attach(vi->dev);
> > +
>
> Note that open/close start/stop refill,
> I wonder whether this is a problem.
> For example, can virtnet_close run at this point,
> and cancel refill?

Same as above.

> > +	if (!try_fill_recv(vi, GFP_KERNEL))
> > +		schedule_delayed_work(&vi->refill, 0);
>
> This needs to be switched to a non-reentrant wq too?
> Subsequent invocations could create problems?

Note that this will be the first item to be queued in the workqueue for
the refill work.

		Amit

_______________________________________________
Virtualization mailing list
Virtualization@xxxxxxxxxxxxxxxxxxxxxxxxxx
https://lists.linuxfoundation.org/mailman/listinfo/virtualization