On 2020/11/18 2:57 PM, Mike Christie wrote:
On 11/17/20 11:17 PM, Jason Wang wrote:
On 2020/11/18 12:40 AM, Stefan Hajnoczi wrote:
On Thu, Nov 12, 2020 at 05:18:59PM -0600, Mike Christie wrote:
The following kernel patches were made over Michael's vhost branch:
https://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost.git/log/?h=vhost
and the vhost-scsi bug fix patchset:
https://lore.kernel.org/linux-scsi/20201112170008.GB1555653@stefanha-x1.localdomain/T/#t
And the qemu patch was made over the qemu master branch.
vhost-scsi currently supports multiple queues with the num_queues
setting, but we end up with a setup where the guest's scsi/block
layer can do a queue per vCPU and the layers below vhost can do
a queue per CPU. vhost-scsi will then create num_queues virtqueues,
but all IO gets submitted to and completed on a single vhost-scsi thread.
After 2 - 4 vqs this becomes a bottleneck.
This patchset allows us to create a worker thread per IO vq, so we
can better utilize multiple CPUs with the multiple queues. It
implements Jason's suggestion to create the initial worker as
normal, then create the extra workers for IO vqs with the
VHOST_SET_VRING_ENABLE ioctl command added in this patchset.
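A rough userspace sketch of how that could look (not the actual patchset
code; it assumes the new ioctl takes a struct vhost_vring_state like the
existing VHOST_SET_VRING_* calls, with num = 1 meaning enable):

#include <sys/ioctl.h>
#include <linux/vhost.h>

/* Sketch only: VHOST_SET_VRING_ENABLE is added by this patchset, so the
 * request number and exact semantics come from its updated vhost.h. */
static int enable_io_vq(int vhost_fd, unsigned int vq_index)
{
        struct vhost_vring_state state = {
                .index = vq_index,
                .num   = 1,     /* 1 = enable (extra worker), 0 = disable */
        };

        return ioctl(vhost_fd, VHOST_SET_VRING_ENABLE, &state);
}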
How does userspace find out the tids and set their CPU affinity?
What is the meaning of the new VHOST_SET_VRING_ENABLE ioctl? It doesn't
really "enable" or "disable" the vq, requests are processed regardless.
Actually, I think it should do a real "enable/disable" that tries to follow the virtio spec.
What does real mean here?
I think it means that when a vq is disabled, vhost won't process any
requests from that virtqueue.
For the vdpa enable call, for example, would it be like
ifcvf_vdpa_set_vq_ready, where it sets the ready bit, or more like mlx5_vdpa_set_vq_ready,
where it can do some more work in the disable case?
For vDPA, it would be more complicated.
E.g. for IFCVF, it just delays the setting of queue_enable until it gets
DRIVER_OK. Technically it could pass queue_enable through to the
hardware as mlx5e does.
For net, with something like ifcvf_vdpa_set_vq_ready's design, would we have
vhost_vring_ioctl() set some vhost_virtqueue enable bit? We would then have some helper
vhost_vq_is_enabled() and some code to detect whether userspace supports the new ioctl.
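(Roughly something like the following fragment against drivers/vhost/vhost.h,
just to make the idea concrete; the "enabled" field is made up:)

/* Sketch only: "enabled" is a hypothetical field set from the vring
 * ioctl handler when userspace issues the new enable ioctl, and
 * defaulted to true when the capability was not negotiated. */
static inline bool vhost_vq_is_enabled(struct vhost_virtqueue *vq)
{
        return vq->enabled;
}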
Yes, vhost supports backend capabilities. When userspace negotiates the new
capability, we should depend on SET_VRING_ENABLE; if not, we can do
vhost_vq_is_enabled().
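(A userspace-side sketch of that negotiation check, reusing the existing
VHOST_GET_BACKEND_FEATURES ioctl; the feature bit name and number are
hypothetical:)

#include <sys/ioctl.h>
#include <linux/vhost.h>

#define VHOST_BACKEND_F_VRING_ENABLE 0x3   /* hypothetical bit */

static bool vring_enable_supported(int vhost_fd)
{
        __u64 features = 0;

        if (ioctl(vhost_fd, VHOST_GET_BACKEND_FEATURES, &features) < 0)
                return false;

        return features & (1ULL << VHOST_BACKEND_F_VRING_ENABLE);
}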
And then in vhost_net_set_backend do we call vhost_vq_is_enabled()? What is done
for disable then?
It needs more thought, but the question is not specific to
SET_VRING_ENABLE. Consider that the guest may zero the ring address as well.
For disabling, we can simply flush the work and disable all the polls.
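(Something along these lines on the kernel side, assuming it runs from the
ioctl handler with the relevant locking already done; not actual patchset
code:)

static void vhost_vq_disable(struct vhost_virtqueue *vq)
{
        vhost_poll_stop(&vq->poll);     /* stop reacting to kicks */
        vhost_poll_flush(&vq->poll);    /* wait for already queued work */
}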
It doesn't seem to buy a lot of new functionality. Is it just
so we follow the spec?
My understanding is that, since the spec defines queue_enable, we should
support it in vhost. And we can piggyback the delayed vq creation on
this feature. Otherwise we will duplicate the functionality if we want to
support queue_enable.
Or do you want it to work more like mlx5_vdpa_set_vq_ready? For this, in vhost_vring_ioctl(),
when we get the new ioctl, would we call into the drivers and have them start
and stop queues? For enable, what would you do for net in this case?
Net is something different; we can simply use SET_BACKEND to disable a
specific virtqueue without introducing new ioctls. Note that net mq
is different from scsi in that net has a vhost device per queue pair,
and the API allows us to set the backend for a specific virtqueue.
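(For reference, that per-virtqueue disable needs no new uapi today;
detaching the backend with fd == -1 through the existing ioctl stops
processing on that queue:)

#include <sys/ioctl.h>
#include <linux/vhost.h>

static int vhost_net_disable_vq(int vhost_net_fd, unsigned int vq_index)
{
        struct vhost_vring_file file = {
                .index = vq_index,
                .fd    = -1,    /* no backend: the vq is effectively disabled */
        };

        return ioctl(vhost_net_fd, VHOST_NET_SET_BACKEND, &file);
}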
For disable,
would you do something like vhost_net_stop_vq (we don't free up anything allocated
in vhost_vring_ioctl calls, but we can stop what we set up in the net driver)?
It's up to you; if you think you should free the resources, you can do that.
Is this useful for the current net mq design, or is this for something like a design where
you would do one vhost net device with multiple vqs?
I think SET_VRING_ENABLE is more useful for SCSI since it has a model
of multiple vqs per vhost device.
My issue/concern is that in general these calls seem useful, but we don't really
need them for scsi because vhost-scsi is already stuck creating vqs the way it does
due to existing users. If we do the ifcvf_vdpa_set_vq_ready type of design where
we just set some bit, then the new ioctl does not give us a lot. It's just an extra
check and extra code.
And for the mlx5_vdpa_set_vq_ready type of design, it doesn't seem likely that
an admin will often want to remove vqs from a running device.
In this case, qemu may just disable the queues of vhost-scsi via
SET_VRING_ENABLE, and then we can free the resources?
And for both addition and removal for scsi, we would need code in virtio-scsi to handle
hot plug addition/removal of a queue and then redo the multiqueue mappings, which
would be difficult to add with no one requesting it.
Thanks