> Just a guess - Josef, is the eventfd for the ring fd itself?

yes - via eventfd_write we want to wake up/unblock
io_uring_enter(IORING_ENTER_GETEVENTS). The eventfd read op is submitted
again every time it completes, and each ring fd in netty has one eventfd
(a rough sketch of the pattern is appended below the quoted mail)

On Sun, 20 Dec 2020 at 17:14, Jens Axboe <axboe@xxxxxxxxx> wrote:
>
> On 12/20/20 6:00 AM, Pavel Begunkov wrote:
> > On 20/12/2020 07:13, Josef wrote:
> >>> Guys, do you share rings between processes? Explicitly like sending
> >>> io_uring fd over a socket, or implicitly e.g. sharing fd tables
> >>> (threads), or cloning with copying fd tables (and so taking a ref
> >>> to a ring).
> >>
> >> no, in netty we don't share rings between processes
> >>
> >>> In other words, if you kill all your io_uring applications, does it
> >>> go back to normal?
> >>
> >> not at all, the io-wq worker thread is still running, I literally have
> >> to restart the vm to go back to normal (as far as I know it's not
> >> possible to kill kernel threads, right?)
> >>
> >>> Josef, can you test the patch below instead? Following Jens' idea it
> >>> cancels more aggressively when a task is killed or exits. It's based
> >>> on [1] but would probably apply fine to for-next.
> >>
> >> it works - I ran several tests with the eventfd read op async flag
> >> enabled, thanks a lot :) you are awesome guys :)
> >
> > Thanks for testing and confirming! Either we forgot something in
> > io_ring_ctx_wait_and_kill() and it just can't cancel some requests,
> > or we have a dependency that prevents release from happening.
>
> Just a guess - Josef, is the eventfd for the ring fd itself?
>
> BTW, the io_wq_cancel_all() in io_ring_ctx_wait_and_kill() needs to go.
> We should just use targeted cancellation - that's cleaner, and the
> cancel all will impact ATTACH_WQ as well. Separate thing to fix, though.
>
> --
> Jens Axboe

--
Josef
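
For reference, a minimal sketch of the wake-up pattern described above,
assuming liburing. One eventfd per ring; a read on the eventfd stays
submitted so that eventfd_write() from another thread completes it and
unblocks the io_uring_enter(IORING_ENTER_GETEVENTS) wait. The names and
structure are illustrative only, not netty's actual code.

#include <liburing.h>
#include <sys/eventfd.h>
#include <unistd.h>

struct wakeup_ring {
        struct io_uring ring;
        int             efd;     /* one eventfd per ring */
        eventfd_t       counter; /* buffer for the eventfd read op */
};

/* Arm (or re-arm) the eventfd read; resubmitted after every completion. */
static int arm_eventfd_read(struct wakeup_ring *wr)
{
        struct io_uring_sqe *sqe = io_uring_get_sqe(&wr->ring);

        if (!sqe)
                return -1;
        io_uring_prep_read(sqe, wr->efd, &wr->counter,
                           sizeof(wr->counter), 0);
        sqe->flags |= IOSQE_ASYNC;      /* the async flag discussed above */
        io_uring_sqe_set_data(sqe, wr); /* tag so the CQE is recognisable */
        return io_uring_submit(&wr->ring);
}

/* Called from another thread to unblock the waiter below. */
static void wakeup(struct wakeup_ring *wr)
{
        eventfd_write(wr->efd, 1);
}

int main(void)
{
        struct wakeup_ring wr;
        struct io_uring_cqe *cqe;

        if (io_uring_queue_init(64, &wr.ring, 0) < 0)
                return 1;
        wr.efd = eventfd(0, 0);
        arm_eventfd_read(&wr);

        /* Event loop: blocks in io_uring_enter(IORING_ENTER_GETEVENTS)
         * until some CQE arrives, including the eventfd read completed
         * by wakeup(). */
        if (io_uring_wait_cqe(&wr.ring, &cqe) == 0) {
                if (io_uring_cqe_get_data(cqe) == &wr)
                        arm_eventfd_read(&wr); /* re-arm for the next wake-up */
                io_uring_cqe_seen(&wr.ring, cqe);
        }

        close(wr.efd);
        io_uring_queue_exit(&wr.ring);
        return 0;
}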