Re: [PATCH RFC 00/14] vhost: multiple worker support

On Tue, May 04, 2021 at 03:11:37PM -0500, Mike Christie wrote:
On 5/4/21 10:56 AM, Stefano Garzarella wrote:
Hi Mike,

On Wed, Apr 28, 2021 at 05:36:59PM -0500, Mike Christie wrote:
The following patches apply over mst's vhost branch and were tested
against that branch and also against mkp's 5.13 branch, which has some
vhost-scsi changes.

These patches allow us to support multiple vhost workers per device. I
ended up just doing Stefan's original idea where userspace has the
kernel create a worker and we pass back the pid. This has the benefit
over the workqueue and userspace-thread approaches that we only have
one-ish code path in the kernel.

The kernel patches here then allow us to do N workers per device and
also to share workers across devices.

I took a first look and left a few comments.

In general it looks good to me; I'm just not sure whether it's okay to use the kthread pid directly or whether it would be better to use an internal id.

For example, I wonder whether this could be affected by pid namespaces or raise some security issues.
I admit I don't know much about this topic, so this might be a silly concern.


Ah yeah, that was my other TODO. I did the lazy loop and left the
"hash on pid" TODO in patch 11 because I was not sure. I thought it
was ok, because only the userspace process that does the
VHOST_SET_OWNER call can do the other ioctls.

Oh I see.


I can do pid, or use an xarray/ida/idr type of struct and pass that
id between user/kernel space, if that's preferred.
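(A minimal sketch of the ida option, just to make the idea concrete; the
vhost_worker_ida name and the worker->id field are hypothetical, not from
these patches:)

#include <linux/idr.h>

static DEFINE_IDA(vhost_worker_ida);

/* Hand userspace a small, pid-namespace-agnostic id instead of the
 * kthread pid.
 */
static int vhost_worker_alloc_id(struct vhost_worker *worker)
{
	int id = ida_alloc(&vhost_worker_ida, GFP_KERNEL);

	if (id < 0)
		return id;
	worker->id = id;	/* returned to userspace by the new ioctl */
	return 0;
}

static void vhost_worker_free_id(struct vhost_worker *worker)
{
	ida_free(&vhost_worker_ida, worker->id);
}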


Let's see what others say; it was just a doubt I had while looking at the patches.



I included a patch for qemu so you can get an idea of how it works.

TODO:
-----
- polling
- Allow sharing workers across devices. Kernel support is added and I
hacked up userspace to test, but I'm still working on a sane way to
manage it in userspace.
- Bind to specific CPUs. Commands like "virsh emulatorpin" work with
these patches and allow us to pin the group of vhost threads to different
CPUs. But we can also set a specific vq's worker to run on a specific CPU.
- I'm handling old kernels by just checking for EPERM. Does this require
a feature bit?

Do you mean when the new ioctls are not available? Don't we return ENOIOCTLCMD in that case?
vhost_vring_ioctl returns -ENOIOCTLCMD but that does not get propagated to the app.
Check out the comment in include/linux/errno.h and fs/ioctl.c:vfs_ioctl() where
-ENOIOCTLCMD is caught and -ENOTTY is returned.

Ah right!


To make this reply a little more complicated: at the time I wrote the code
I thought something was translating -ENOTTY to -EPERM, but after posting I
realized that ioctl() always returns -1 on error (I had mistaken that -1
for a -EPERM from the kernel). errno is set to ENOTTY as expected when an
ioctl is not implemented, so the -EPERM references should really be
errno == ENOTTY.
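(So the old-kernel probe on the QEMU side ends up looking something like
this; VHOST_NEW_WORKER and struct vhost_worker_state are placeholders for
whatever the final uapi names end up being:)

#include <errno.h>
#include <stdbool.h>
#include <stdio.h>
#include <sys/ioctl.h>

static bool vhost_new_worker_supported(int vhostfd,
                                       struct vhost_worker_state *w)
{
    if (ioctl(vhostfd, VHOST_NEW_WORKER, w) == 0)
        return true;
    /* Old kernel: vhost returns -ENOIOCTLCMD, vfs_ioctl() converts it to
     * -ENOTTY, so here ioctl() returns -1 with errno set to ENOTTY.
     */
    if (errno == ENOTTY)
        return false;
    /* Anything else (e.g. EPERM from the owner check) is a real error. */
    perror("VHOST_NEW_WORKER");
    return false;
}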


Thanks for the clarification.

However, looking at it again, maybe it would make sense to add a new VHOST_BACKEND_F_* feature bit for this functionality, since we are adding 2 new ioctls that are meant to be used together.

I saw that we have used VHOST_BACKEND_F_* only for the new IOTLB features, so maybe for this functionality the error returned by ioctl() is enough. Let's see what others think.
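(If the feature bit route is taken, the userspace probe could look roughly
like this; VHOST_BACKEND_F_WORKER is a made-up name and bit number, only
for illustration:)

#include <stdbool.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/vhost.h>

#define VHOST_BACKEND_F_WORKER 2	/* hypothetical bit number */

static bool vhost_has_worker_feature(int vhostfd)
{
    uint64_t features = 0;

    if (ioctl(vhostfd, VHOST_GET_BACKEND_FEATURES, &features) < 0)
        return false;
    return features & (1ULL << VHOST_BACKEND_F_WORKER);
}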

Thanks,
Stefano

_______________________________________________
Virtualization mailing list
Virtualization@xxxxxxxxxxxxxxxxxxxxxxxxxx
https://lists.linuxfoundation.org/mailman/listinfo/virtualization


