On Wed, May 29, 2024 at 10:36 PM Bernd Schubert <bschubert@xxxxxxx> wrote:
>
> Most of the performance improvement with fuse-over-io-uring for
> synchronous requests comes from the possibility to run request
> processing on the submitting cpu core and to also wake the
> submitting process on the same core - avoiding switching between
> cpu cores.
>
> Signed-off-by: Bernd Schubert <bschubert@xxxxxxx>
> ---
>  fs/fuse/dev.c | 5 ++++-
>  1 file changed, 4 insertions(+), 1 deletion(-)
>
> diff --git a/fs/fuse/dev.c b/fs/fuse/dev.c
> index c7fd3849a105..851c5fa99946 100644
> --- a/fs/fuse/dev.c
> +++ b/fs/fuse/dev.c
> @@ -333,7 +333,10 @@ void fuse_request_end(struct fuse_req *req)
>  		spin_unlock(&fc->bg_lock);
>  	} else {
>  		/* Wake up waiter sleeping in request_wait_answer() */
> -		wake_up(&req->waitq);
> +		if (fuse_per_core_queue(fc))
> +			__wake_up_on_current_cpu(&req->waitq, TASK_NORMAL, NULL);
> +		else
> +			wake_up(&req->waitq);

Would it be possible to apply this idea to a regular FUSE connection?

What would happen if some (buggy or malicious) userspace FUSE server
uses sched_setaffinity(2) to run only on a subset of active CPUs?

>  	}
>
>  	if (test_bit(FR_ASYNC, &req->flags))
>
> --
> 2.40.1
>
>