On Fri, Nov 20, 2020 at 08:45:49AM +0000, Stefan Hajnoczi wrote:
> On Thu, Nov 19, 2020 at 5:08 PM Stefan Hajnoczi <stefanha@xxxxxxxxx> wrote:
> >
> > On Thu, Nov 19, 2020 at 4:43 PM Mike Christie
> > <michael.christie@xxxxxxxxxx> wrote:
> > >
> > > On 11/19/20 10:24 AM, Stefan Hajnoczi wrote:
> > > > On Thu, Nov 19, 2020 at 4:13 PM Mike Christie
> > > > <michael.christie@xxxxxxxxxx> wrote:
> > > >>
> > > >> On 11/19/20 8:46 AM, Michael S. Tsirkin wrote:
> > > >>> On Wed, Nov 18, 2020 at 11:31:17AM +0000, Stefan Hajnoczi wrote:
> > > >
> > > > struct vhost_run_worker_info {
> > > >     struct timespec *timeout;
> > > >     sigset_t *sigmask;
> > > >
> > > >     /* List of virtqueues to process */
> > > >     unsigned nvqs;
> > > >     unsigned vqs[];
> > > > };
> > > >
> > > > /* This blocks until the timeout is reached, a signal is received,
> > > >    or the vhost device is destroyed */
> > > > int ret = ioctl(vhost_fd, VHOST_RUN_WORKER, &info);
> > > >
> > > > As you can see, userspace isn't involved with dealing with the
> > > > requests. It just acts as a thread donor to the vhost driver.
> > > >
> > > > We would want the VHOST_RUN_WORKER calls to be infrequent to avoid
> > > > the penalty of switching into the kernel, copying in the arguments,
> > > > etc.
> > >
> > > I didn't get this part. Why have the timeout? When the timeout
> > > expires, does userspace just call right back down to the kernel or
> > > does it do some sort of processing/operation?
> > >
> > > You could have your worker function run from that ioctl wait for a
> > > signal or a wake-up call from the vhost_work/poll functions.
> >
> > An optional timeout argument is common in blocking interfaces like
> > poll(2), recvmmsg(2), etc.
> >
> > Although something can send a signal to the thread instead,
> > implementing that in an application is more awkward than passing a
> > struct timespec.
> >
> > Compared to other blocking calls we don't expect
> > ioctl(VHOST_RUN_WORKER) to return soon, so maybe the timeout will
> > rarely be used and can be dropped from the interface.
> >
> > BTW the code I posted wasn't a carefully thought-out proposal :). The
> > details still need to be considered and I'm going to be offline for
> > the next week, so maybe someone else can think it through in the
> > meantime.
>
> One final thought before I'm offline for a week. If
> ioctl(VHOST_RUN_WORKER) is specific to a single vhost device instance
> then it's hard to support poll-mode (busy waiting) workers because
> each device instance consumes a whole CPU. If we stick to an interface
> where the kernel manages the worker threads then it's easier to share
> workers between devices for polling.

Yes, that is the reason vhost did its own thing in the first place.
I am vaguely thinking about poll(2) or a similar interface that can
wait for an event on multiple FDs.

> I have CCed Stefano Garzarella, who is looking at similar designs for
> vDPA software device implementations.
>
> Stefan
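
For reference, here is a minimal userspace sketch of the thread-donor
loop being discussed. It assumes the proposed (not yet existing)
VHOST_RUN_WORKER ioctl and struct vhost_run_worker_info from Stefan's
snippet; the request number, struct layout, timeout value, and
virtqueue indices are made up for illustration only:

/*
 * Sketch only: VHOST_RUN_WORKER and struct vhost_run_worker_info come
 * from the proposal quoted above, not from an existing kernel ABI, so
 * the definitions below are assumptions.
 */
#include <errno.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <sys/ioctl.h>
#include <linux/vhost.h>	/* VHOST_VIRTIO, _IOW */

struct vhost_run_worker_info {
	struct timespec *timeout;	/* NULL = block indefinitely */
	sigset_t *sigmask;		/* signals that interrupt the worker */
	unsigned nvqs;			/* number of entries in vqs[] */
	unsigned vqs[];			/* virtqueue indices to process */
};

/* Hypothetical request number; a real patch would define this in
 * <linux/vhost.h>. */
#define VHOST_RUN_WORKER _IOW(VHOST_VIRTIO, 0x70, struct vhost_run_worker_info)

/* Donate the calling thread to the vhost driver for virtqueues 0 and 1. */
static void run_worker(int vhost_fd)
{
	struct timespec timeout = { .tv_sec = 30 };	/* optional */
	struct vhost_run_worker_info *info;

	info = calloc(1, sizeof(*info) + 2 * sizeof(unsigned));
	if (!info)
		return;
	info->timeout = &timeout;
	info->sigmask = NULL;
	info->nvqs = 2;
	info->vqs[0] = 0;
	info->vqs[1] = 1;

	/*
	 * The kernel runs virtqueue work in this thread's context and only
	 * returns on timeout, signal, or device destruction, so the loop
	 * body should execute rarely.
	 */
	for (;;) {
		if (ioctl(vhost_fd, VHOST_RUN_WORKER, info) < 0) {
			if (errno == EINTR)
				continue;	/* interrupted by a signal */
			perror("VHOST_RUN_WORKER");
			break;
		}
		/* Timeout expired: do any housekeeping, then re-enter. */
	}
	free(info);
}

Whether the timeout stays in the interface, or the kernel ends up
managing a shared pool of workers for polling as Stefan suggests, would
change the loop body, but the basic shape of the donor thread is the
same.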