On 2021/6/16 8:57 PM, Xuan Zhuo wrote:
On Wed, 16 Jun 2021 20:51:41 +0800, Jason Wang <jasowang@xxxxxxxxxx> wrote:
On 2021/6/16 6:19 PM, Xuan Zhuo wrote:
+ * In this way, even if the xsk has been unbound from the rq/sq, or a new xsk
+ * has been bound to the rq/sq and a new virtnet_xsk_ctx_head created, the old
+ * virtnet_xsk_ctx can still be recycled. The head and all its ctxs are freed
+ * once ref reaches 0.
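[For illustration only, a minimal C sketch of the refcounted-context idea the
comment describes; the names, fields, and the virtnet_xsk_ctx_head_free()
helper are assumptions rather than the exact code from the patch, and locking
is omitted:]

    struct virtnet_xsk_ctx_head {
            struct virtnet_xsk_ctx *ctx_free; /* free list of contexts */
            u64 ref;                          /* contexts still in flight */
    };

    struct virtnet_xsk_ctx {
            struct virtnet_xsk_ctx_head *head; /* owning head, fixed */
            struct virtnet_xsk_ctx *next;
    };

    /* Recycling goes through the ctx's own head, so an old head keeps
     * working even after its xsk has been unbound or a new xsk/head has
     * been created; when the last in-flight ctx comes back (ref hits 0),
     * the head and all its ctxs are freed.
     */
    static void virtnet_xsk_ctx_put(struct virtnet_xsk_ctx *ctx)
    {
            struct virtnet_xsk_ctx_head *head = ctx->head;

            ctx->next = head->ctx_free;
            head->ctx_free = ctx;
            if (--head->ref == 0)
                    virtnet_xsk_ctx_head_free(head); /* hypothetical */
    }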
This looks complicated and it will increase the footprint. Considering the
performance penalty and the complexity, I would suggest using reset instead.
Then we don't need to introduce such a context.
I don't like this either. It would be best if we could reset the queue, but as
I understand it, the backend would also have to support this, so without a
synchronized backend update you can't use xsk.
Yes, actually, vhost-net supports per-vq suspending. The problem is that we're
lacking a proper API at the virtio level.
Virtio-pci has queue_enable, but the spec forbids writing zero to it.
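[For reference, the per-queue portion of struct virtio_pci_common_cfg from
include/uapi/linux/virtio_pci.h, excerpted with comments added here; since the
spec only permits writing 1 to queue_enable, there is currently no way to
disable or reset a single queue through it:]

    struct virtio_pci_common_cfg {
            /* ... device-wide fields elided ... */

            /* About a specific virtqueue. */
            __le16 queue_select;      /* selects which queue the
                                       * fields below refer to */
            __le16 queue_size;
            __le16 queue_msix_vector;
            __le16 queue_enable;      /* spec allows writing 1 only,
                                       * so one queue cannot be
                                       * disabled individually */
            __le16 queue_notify_off;
            /* ... queue address fields elided ... */
    };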
I don't think resetting the entire device is a good solution. If you want to
bind xsk to 10 queues, you may have to reset the entire device 10 times, which
is clearly not acceptable. But the current spec does not support resetting a
single queue, so I chose the current solution.
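[As a userspace sketch of that scenario: binding AF_XDP sockets to several
queues with libbpf's xsk_socket__create() looks roughly like the loop below;
the interface name, queue count, and the caller-prepared umem/ring arrays are
illustrative. Without per-queue reset, each iteration would cost a full device
reset:]

    #include <bpf/xsk.h>    /* libbpf AF_XDP helpers */

    #define NUM_QUEUES 10

    /* Bind one AF_XDP socket to each of the first NUM_QUEUES queues
     * of eth0. Error unwinding is omitted for brevity.
     */
    static int bind_xsk_per_queue(struct xsk_umem **umems,
                                  struct xsk_ring_cons *rx,
                                  struct xsk_ring_prod *tx,
                                  struct xsk_socket **socks)
    {
            for (int q = 0; q < NUM_QUEUES; q++) {
                    int err = xsk_socket__create(&socks[q], "eth0", q,
                                                 umems[q], &rx[q], &tx[q],
                                                 NULL /* default config */);
                    if (err)
                            return err;
            }
            return 0;
    }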
Jason, what do you think we should do? Implement reset for a single queue?
Yes, it's the best way. Do you want to work on that?
Of course, I am very willing to continue this work, although requiring users to
upgrade the backend before they can use virtio-net + xsk makes the situation a
bit troublesome.
I will complete the kernel modification as soon as possible, but I am not
familiar with the process of submitting a spec patch. Can you give me some
guidance on where I should send it?
Subscribe to the virtio-dev mailing list [1] and send the spec patch there.
Thanks
[1]
https://www.oasis-open.org/committees/tc_home.php?wg_abbrev=virtio#feedback
Thanks.
We can start with the spec patch, introduce it as a basic facility, and
implement it in the PCI transport first.
Thanks
Looking forward to your reply!!!
Thanks