On 2/20/20 12:36 PM, Glauber Costa wrote:
> On Thu, Feb 20, 2020 at 2:19 PM Glauber Costa <glauber@xxxxxxxxxxxx> wrote:
>>
>> On Thu, Feb 20, 2020 at 2:12 PM Jens Axboe <axboe@xxxxxxxxx> wrote:
>>>
>>> On 2/20/20 11:45 AM, Glauber Costa wrote:
>>>> On Thu, Feb 20, 2020 at 12:28 PM Jens Axboe <axboe@xxxxxxxxx> wrote:
>>>>>
>>>>> On 2/20/20 9:52 AM, Glauber Costa wrote:
>>>>>> On Thu, Feb 20, 2020 at 11:39 AM Jens Axboe <axboe@xxxxxxxxx> wrote:
>>>>>>>
>>>>>>> On 2/20/20 9:34 AM, Glauber Costa wrote:
>>>>>>>> On Thu, Feb 20, 2020 at 11:29 AM Jens Axboe <axboe@xxxxxxxxx> wrote:
>>>>>>>>>
>>>>>>>>> On 2/20/20 9:17 AM, Jens Axboe wrote:
>>>>>>>>>> On 2/20/20 7:19 AM, Glauber Costa wrote:
>>>>>>>>>>> Hi there, me again
>>>>>>>>>>>
>>>>>>>>>>> Kernel is at 043f0b67f2ab8d1af418056bc0cc6f0623d31347
>>>>>>>>>>>
>>>>>>>>>>> This test is easier to explain: it essentially issues a connect and a
>>>>>>>>>>> shutdown right away.
>>>>>>>>>>>
>>>>>>>>>>> It currently fails due to no fault of io_uring. But every now and then
>>>>>>>>>>> it crashes (you may have to run more than once to get it to crash)
>>>>>>>>>>>
>>>>>>>>>>> Instructions are similar to my last test.
>>>>>>>>>>> Except the test to build is now "tests/unit/connect_test"
>>>>>>>>>>> Code is at git@xxxxxxxxxx:glommer/seastar.git branch io-uring-connect-crash
>>>>>>>>>>>
>>>>>>>>>>> Run it with ./build/release/tests/unit/connect_test -- -c1
>>>>>>>>>>> --reactor-backend=uring
>>>>>>>>>>>
>>>>>>>>>>> Backtrace attached
>>>>>>>>>>
>>>>>>>>>> Perfect, thanks, I'll take a look!
>>>>>>>>>
>>>>>>>>> Haven't managed to crash it yet, but every run complains:
>>>>>>>>>
>>>>>>>>> got to shutdown of 10 with refcnt: 2
>>>>>>>>> Refs being all dropped, calling forget for 10
>>>>>>>>> terminate called after throwing an instance of 'fmt::v6::format_error'
>>>>>>>>>   what(): argument index out of range
>>>>>>>>> unknown location(0): fatal error: in "unixdomain_server": signal: SIGABRT (application abort requested)
>>>>>>>>>
>>>>>>>>> Not sure if that's causing it not to fail here.
>>>>>>>>
>>>>>>>> Ok, that means it "passed". (I was in the process of figuring out
>>>>>>>> where I got this wrong when I started seeing the crashes.)
>>>>>>>
>>>>>>> Can you do, in your kernel dir:
>>>>>>>
>>>>>>> $ gdb vmlinux
>>>>>>> [...]
>>>>>>> (gdb) l *__io_queue_sqe+0x4a
>>>>>>>
>>>>>>> and see what it says?
>>>>>>
>>>>>> 0xffffffff81375ada is in __io_queue_sqe (fs/io_uring.c:4814).
>>>>>> 4809            struct io_kiocb *linked_timeout;
>>>>>> 4810            struct io_kiocb *nxt = NULL;
>>>>>> 4811            int ret;
>>>>>> 4812
>>>>>> 4813    again:
>>>>>> 4814            linked_timeout = io_prep_linked_timeout(req);
>>>>>> 4815
>>>>>> 4816            ret = io_issue_sqe(req, sqe, &nxt, true);
>>>>>> 4817
>>>>>> 4818            /*
>>>>>>
>>>>>> (I am not using timeouts, just async_cancel)
>>>>>
>>>>> Can't seem to hit it here, went through thousands of iterations...
>>>>> I'll keep trying.
>>>>>
>>>>> If you have time, you can try and enable CONFIG_KASAN=y and see if
>>>>> you can hit it with that.
>>>>
>>>> I can
>>>>
>>>> Attaching full dmesg
>>>
>>> Can you try the latest? It's sha d8154e605f84.
>
> 10 runs, no crashes.
>
> Thanks!

Great! Thanks for reporting and the quick testing.

-- 
Jens Axboe
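
[Editor's note: for readers who want to try the pattern described above without building Seastar, here is a minimal, untested sketch in plain liburing of what the thread says the test essentially does: submit a connect through io_uring and shut the socket down right away. This is not the connect_test code itself; the address, port, queue depth, and error handling are placeholders.]

/* Build (assuming liburing is installed): gcc -o connect_shutdown connect_shutdown.c -luring */
#include <liburing.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>
#include <unistd.h>
#include <string.h>
#include <stdio.h>

int main(void)
{
	struct io_uring ring;
	struct io_uring_sqe *sqe;
	struct io_uring_cqe *cqe;
	struct sockaddr_in addr;
	int fd, ret;

	if (io_uring_queue_init(8, &ring, 0) < 0)
		return 1;

	fd = socket(AF_INET, SOCK_STREAM, 0);
	if (fd < 0)
		return 1;

	memset(&addr, 0, sizeof(addr));
	addr.sin_family = AF_INET;
	addr.sin_port = htons(12345);		/* placeholder port */
	inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

	/* Queue the connect through io_uring... */
	sqe = io_uring_get_sqe(&ring);
	io_uring_prep_connect(sqe, fd, (struct sockaddr *) &addr, sizeof(addr));
	io_uring_sqe_set_data(sqe, (void *) 1);
	io_uring_submit(&ring);

	/* ...and tear the socket down immediately, before reaping the CQE. */
	shutdown(fd, SHUT_RDWR);

	ret = io_uring_wait_cqe(&ring, &cqe);
	if (!ret) {
		printf("connect cqe res: %d\n", cqe->res);
		io_uring_cqe_seen(&ring, cqe);
	}

	close(fd);
	io_uring_queue_exit(&ring);
	return 0;
}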