Re: [PATCH V3 2/3] vduse: suspend

On 5/20/2024 10:30 PM, Jason Wang wrote:
On Mon, May 20, 2024 at 11:21 PM Steve Sistare
<steven.sistare@xxxxxxxxxx> wrote:

Support the suspend operation.  There is little to do, except flush to
guarantee no workers are running when suspend returns.

Signed-off-by: Steve Sistare <steven.sistare@xxxxxxxxxx>
---
  drivers/vdpa/vdpa_user/vduse_dev.c | 24 ++++++++++++++++++++++++
  1 file changed, 24 insertions(+)

diff --git a/drivers/vdpa/vdpa_user/vduse_dev.c b/drivers/vdpa/vdpa_user/vduse_dev.c
index 73c89701fc9d..7dc46f771f12 100644
--- a/drivers/vdpa/vdpa_user/vduse_dev.c
+++ b/drivers/vdpa/vdpa_user/vduse_dev.c
@@ -472,6 +472,18 @@ static void vduse_dev_reset(struct vduse_dev *dev)
         up_write(&dev->rwsem);
  }

+static void vduse_flush_work(struct vduse_dev *dev)
+{
+       flush_work(&dev->inject);
+
+       for (int i = 0; i < dev->vq_num; i++) {
+               struct vduse_virtqueue *vq = dev->vqs[i];
+
+               flush_work(&vq->inject);
+               flush_work(&vq->kick);
+       }
+}
+
  static int vduse_vdpa_set_vq_address(struct vdpa_device *vdpa, u16 idx,
                                 u64 desc_area, u64 driver_area,
                                 u64 device_area)
@@ -724,6 +736,17 @@ static int vduse_vdpa_reset(struct vdpa_device *vdpa)
         return ret;
  }

+static int vduse_vdpa_suspend(struct vdpa_device *vdpa)
+{
+       struct vduse_dev *dev = vdpa_to_vduse(vdpa);
+
+       down_write(&dev->rwsem);
+       vduse_flush_work(dev);
+       up_write(&dev->rwsem);

Can this prevent new work from being scheduled?

Are you suggesting I return an error below if the dev is suspended?
I can do that.

However, I now suspect this implementation of vduse_vdpa_suspend is not
complete in other ways, so I withdraw this patch pending future work.
Thanks for looking at it.

- Steve

static int vduse_dev_queue_irq_work(struct vduse_dev *dev,
                                     struct work_struct *irq_work,
                                     int irq_effective_cpu)
{
         int ret = -EINVAL;

         down_read(&dev->rwsem);
         if (!(dev->status & VIRTIO_CONFIG_S_DRIVER_OK))
                 goto unlock;

         ret = 0;
         if (irq_effective_cpu == IRQ_UNBOUND)
                 queue_work(vduse_irq_wq, irq_work);
         else
                 queue_work_on(irq_effective_cpu,
                               vduse_irq_bound_wq, irq_work);
unlock:
         up_read(&dev->rwsem);

         return ret;
}
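
[A minimal sketch of the check Steve describes above, assuming a hypothetical
dev->suspended flag that is not part of the posted patch: set it under the
write lock in suspend, and refuse to queue new irq work while it is set.]

static int vduse_vdpa_suspend(struct vdpa_device *vdpa)
{
         struct vduse_dev *dev = vdpa_to_vduse(vdpa);

         down_write(&dev->rwsem);
         dev->suspended = true;          /* hypothetical flag, for illustration */
         vduse_flush_work(dev);
         up_write(&dev->rwsem);

         return 0;
}

static int vduse_dev_queue_irq_work(struct vduse_dev *dev,
                                     struct work_struct *irq_work,
                                     int irq_effective_cpu)
{
         int ret = -EINVAL;

         down_read(&dev->rwsem);
         /* Refuse new work if the device is not DRIVER_OK or is suspended. */
         if (!(dev->status & VIRTIO_CONFIG_S_DRIVER_OK) || dev->suspended)
                 goto unlock;

         ret = 0;
         if (irq_effective_cpu == IRQ_UNBOUND)
                 queue_work(vduse_irq_wq, irq_work);
         else
                 queue_work_on(irq_effective_cpu,
                               vduse_irq_bound_wq, irq_work);
unlock:
         up_read(&dev->rwsem);

         return ret;
}

[Because the flag is flipped under the write side of dev->rwsem and checked
under the read side, any queueing in flight completes before suspend flushes,
and nothing new can be queued after suspend returns.]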

Thanks

+
+       return 0;
+}
+
  static u32 vduse_vdpa_get_generation(struct vdpa_device *vdpa)
  {
         struct vduse_dev *dev = vdpa_to_vduse(vdpa);
@@ -806,6 +829,7 @@ static const struct vdpa_config_ops vduse_vdpa_config_ops = {
         .set_vq_affinity        = vduse_vdpa_set_vq_affinity,
         .get_vq_affinity        = vduse_vdpa_get_vq_affinity,
         .reset                  = vduse_vdpa_reset,
+       .suspend                = vduse_vdpa_suspend,
         .set_map                = vduse_vdpa_set_map,
         .free                   = vduse_vdpa_free,
  };
--
2.39.3
