Hi Jens,

maybe not strictly related, but maybe it is...

> I recently tried to analyze the performance of Samba using io-uring.
>
> I was using ubuntu 20.04 with the 5.10.0-1023-oem kernel, which is based
> on v5.10.25, see:
> https://kernel.ubuntu.com/git/kernel-ppa/mirror/ubuntu-oem-5.10-focal.git/log/?h=oem-5.10-prep
> trace-cmd is at version 2.8.3-4build1.
>
> In order to find the bottleneck I tried to use:
>
>   trace-cmd record -e all -P ${pid_of_io_uring_worker}
>
> As a result the server was completely dead immediately.
>
> I tried to reproduce this in a virtual machine (inside virtualbox).
>
> I used a modified 'io_uring-cp' that loops forever, see:
> https://github.com/metze-samba/liburing/commit/5e98efed053baf03521692e786c1c55690b04d8e
>
> When I run './io_uring-cp-forever link-cp.c file',
> I see an 'io_wq_manager' and an 'io_wqe_worker-0' kernel thread,
> while './io_uring-cp-forever link-cp.c file' as well as 'io_wqe_worker-0'
> consume about 25% cpu each.

While doing the tests with 5.12-rc8, I somehow triggered a backtrace on
the VM console. It contains:

  ...
  io_issue_sqe
  ...
  io_wq_submit_work
  io_worker_handle_work
  io_wqe_worker
  ...
  io_worker_handle_work
  ...
  RIP: 0010:trace_event_buffer_reserve+0xe5/0x150

Here's a screenshot of it:
https://www.samba.org/~metze/io_issue_sqe_trace_event_buffer_reserve-5.12-rc8-backtrace.png

I don't know what the last action was that I did before it happened.

metze
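
P.S. In case it helps to reproduce this without checking out my branch:
below is a minimal sketch of what the forever-copy loop boils down to.
It is not the actual patch from the commit linked above; it drops
link-cp's linked-SQE pipelining and just does one read followed by one
write per chunk, restarting at EOF. It assumes a liburing recent enough
to have io_uring_prep_read()/io_uring_prep_write().

/*
 * Hedged sketch, not the actual io_uring-cp-forever patch:
 * copy <infile> to <outfile> in an endless loop using liburing.
 * Build: gcc -o cp-forever-sketch cp-forever-sketch.c -luring
 */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <liburing.h>

#define CHUNK 4096

int main(int argc, char *argv[])
{
	struct io_uring ring;
	struct io_uring_sqe *sqe;
	struct io_uring_cqe *cqe;
	static char buf[CHUNK];
	off_t off = 0;
	int infd, outfd;

	if (argc != 3) {
		fprintf(stderr, "usage: %s <infile> <outfile>\n", argv[0]);
		return 1;
	}
	infd = open(argv[1], O_RDONLY);
	outfd = open(argv[2], O_WRONLY | O_CREAT | O_TRUNC, 0644);
	if (infd < 0 || outfd < 0) {
		perror("open");
		return 1;
	}
	if (io_uring_queue_init(8, &ring, 0) < 0) {
		perror("io_uring_queue_init");
		return 1;
	}

	for (;;) {
		int n;

		/* read the next chunk */
		sqe = io_uring_get_sqe(&ring);
		io_uring_prep_read(sqe, infd, buf, sizeof(buf), off);
		io_uring_submit(&ring);
		if (io_uring_wait_cqe(&ring, &cqe) < 0 || cqe->res < 0)
			break;
		n = cqe->res;
		io_uring_cqe_seen(&ring, cqe);
		if (n == 0) {		/* EOF: start over, forever */
			off = 0;
			continue;
		}

		/* write it back out at the same offset */
		sqe = io_uring_get_sqe(&ring);
		io_uring_prep_write(sqe, outfd, buf, n, off);
		io_uring_submit(&ring);
		if (io_uring_wait_cqe(&ring, &cqe) < 0 || cqe->res < 0)
			break;
		io_uring_cqe_seen(&ring, cqe);
		off += n;
	}

	io_uring_queue_exit(&ring);
	return 0;
}

With that running you should see an io_wqe_worker thread burning cpu,
whose pid can then be passed to trace-cmd record -P as above.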