Re: [PATCH v4 1/2] io_uring: avoid whole io_wq_work copy for requests completed inline

On 31/05/2020 17:12, Xiaoguang Wang wrote:
>> A v5 with those two things would be ready to commit.
> First thanks for figuring out this bug.
> Indeed, I ran the test cases in liburing/test for multiple rounds before sending
> the v4 patches; all of them passed and no kernel issues occurred. Here is my kernel
> build config; it seems I had KASAN enabled:
> [lege@localhost linux]$ cat .config | grep KASAN
> CONFIG_KASAN_SHADOW_OFFSET=0xdffffc0000000000
> CONFIG_HAVE_ARCH_KASAN=y
> CONFIG_HAVE_ARCH_KASAN_VMALLOC=y
> CONFIG_CC_HAS_KASAN_GENERIC=y
> CONFIG_KASAN=y
> CONFIG_KASAN_GENERIC=y
> CONFIG_KASAN_OUTLINE=y
> # CONFIG_KASAN_INLINE is not set
> CONFIG_KASAN_STACK=1
> # CONFIG_KASAN_VMALLOC is not set
> # CONFIG_TEST_KASAN is not set
> But I don't know why I didn't reproduce the bug, sorry.

Hmm, I can't find a KASAN option to poison freshly allocated memory. Try filling
req->work with garbage by hand (right after allocation); that should help
reproduce the bug.

> 
> Are you fine with the changes below?
> diff --git a/fs/io_uring.c b/fs/io_uring.c
> index 12296ce3e8b9..2a3a02838f7b 100644
> --- a/fs/io_uring.c
> +++ b/fs/io_uring.c
> @@ -907,9 +907,10 @@ static void io_file_put_work(struct work_struct *work);
>  static inline void io_req_init_async(struct io_kiocb *req,
>                         void (*func)(struct io_wq_work **))
>  {
> -       if (req->flags & REQ_F_WORK_INITIALIZED)
> -               req->work.func = func;
> -       else {
> +       if (req->flags & REQ_F_WORK_INITIALIZED) {
> +               if (!req->work.func)

->func should be initialised after alloc then, if not already.

> +                       req->work.func = func;
> +       } else {
>                 req->work = (struct io_wq_work){ .func = func };
>                 req->flags |= REQ_F_WORK_INITIALIZED;
>         }
> @@ -2920,6 +2921,8 @@ static int __io_splice_prep(struct io_kiocb *req,
>                 return ret;
>         req->flags |= REQ_F_NEED_CLEANUP;
> 
> +       /* Splice will be punted aync, so initialize io_wq_work firstly_*/
> +       io_req_init_async(req, io_wq_submit_work);
>         if (!S_ISREG(file_inode(sp->file_in)->i_mode))
>                 req->work.flags |= IO_WQ_WORK_UNBOUND;
> 
> @@ -3592,6 +3595,9 @@ static int io_statx(struct io_kiocb *req, bool force_nonblock)
> 
>  static int io_close_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
>  {
> +        /* Close may be punted aync, so initialize io_wq_work firstly */

s/aync/async/

> +       io_req_init_async(req, io_wq_submit_work);
> +
>         /*
>          * If we queue this for async, it must not be cancellable. That would
>          * leave the 'file' in an undeterminate state.
> 
> Regards,
> Xiaoguang Wang
> 
> 
>>

-- 
Pavel Begunkov


