On Wed, Oct 27, 2021 at 10:55:04AM +0800, Jason Wang wrote:
> On Tue, Oct 26, 2021 at 11:45 PM Stefan Hajnoczi <stefanha@xxxxxxxxxx> wrote:
> >
> > On Tue, Oct 26, 2021 at 01:37:14PM +0800, Jason Wang wrote:
> > >
> > > On 2021/10/22 at 1:19 PM, Mike Christie wrote:
> > > > This patch allows userspace to create workers and bind them to vqs. You
> > > > can have N workers per dev and also share N workers with M vqs.
> > > >
> > > > Signed-off-by: Mike Christie <michael.christie@xxxxxxxxxx>
> > >
> > > A question: who is the best one to determine the binding? Is it the VMM
> > > (QEMU etc.) or the management stack? If the latter, it looks to me that
> > > it's better to expose this via sysfs.
> >
> > A few options that let the management stack control vhost worker CPU
> > affinity:
> >
> > 1. The management tool opens the vhost device node, calls
> >    ioctl(VHOST_SET_VRING_WORKER), sets up CPU affinity, and then passes
> >    the fd to the VMM. In this case the VMM is still able to call the
> >    ioctl, which may be undesirable from an attack surface perspective.
>
> Yes, and we can't do post-launch or dynamic configuration after the VM
> is launched?

Yes, at least it's a little risky for the management stack to keep the
vhost fd open and make ioctl calls while the VMM is using it.

> > 2. The VMM calls ioctl(VHOST_SET_VRING_WORKER) itself and the management
> >    tool queries the vq:worker details from the VMM (e.g. a new QEMU QMP
> >    query-vhost-workers command similar to query-iothreads). The
> >    management tool can then control CPU affinity on the vhost worker
> >    threads.
> >
> >    (This is how CPU affinity works in QEMU and libvirt today.)
>
> Then we also need a "bind-vhost-workers" command.

The VMM doesn't, but the management tool does.

Stefan
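
For reference, a rough sketch of how option 1 could look from the management
tool's side, built against the patched <linux/vhost.h> from this series. The
vhost_vring_worker field names (.index, .pid), the "pid == -1 creates a new
worker" convention, and the setup_vq_worker()/dev-node names are assumptions
for illustration, not the settled ABI:

  /* Option 1 sketch: create a worker for one vq, pin it, then hand the
   * vhost fd to the VMM (e.g. over SCM_RIGHTS). */
  #define _GNU_SOURCE
  #include <fcntl.h>
  #include <sched.h>
  #include <unistd.h>
  #include <sys/ioctl.h>
  #include <linux/vhost.h>   /* patched header providing VHOST_SET_VRING_WORKER */

  static int setup_vq_worker(const char *dev, unsigned int vq_index, int cpu)
  {
          int fd = open(dev, O_RDWR);     /* e.g. /dev/vhost-scsi */
          if (fd < 0)
                  return -1;

          struct vhost_vring_worker w = {
                  .index = vq_index,
                  .pid = -1,              /* assumed: -1 = create a new worker */
          };
          if (ioctl(fd, VHOST_SET_VRING_WORKER, &w) < 0)
                  goto err;

          /* Pin the worker thread; w.pid is assumed to be filled in by the
           * kernel with the new worker's pid. */
          cpu_set_t set;
          CPU_ZERO(&set);
          CPU_SET(cpu, &set);
          if (sched_setaffinity(w.pid, sizeof(set), &set) < 0)
                  goto err;

          return fd;                      /* caller passes this fd to the VMM */
  err:
          close(fd);
          return -1;
  }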