Re: [PATCH v3 1/2] io_uring: avoid whole io_wq_work copy for requests completed inline

On 30/05/2020 16:36, Xiaoguang Wang wrote:
> 
> The risk mainly comes from the union:
> union {
>     /*
>      * Only commands that never go async can use the below fields,
>      * obviously. Right now only IORING_OP_POLL_ADD uses them, and
>      * async armed poll handlers for regular commands. The latter
>      * restore the work, if needed.
>      */
>     struct {
>         struct callback_head    task_work;
>         struct hlist_node    hash_node;
>         struct async_poll    *apoll;
>     };
>     struct io_wq_work    work;
> };
> 
> 1. apoll and creds sit at the same memory offset, so in the 'async armed poll
> handlers' case apoll will be used, which means creds gets overwritten. In the
> patch "io_uring: avoid unnecessary io_wq_work copy for fast poll feature", I use
> REQ_F_WORK_INITIALIZED to control whether io_wq_work is restored, so your code
> below will break:

Yes, that's an example, which doesn't even consider the second patch. But great
that you anticipated the error. Unconditional partial copy/init probably would
solve the issue, but let's keep it aside for now.
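
To make that concrete, here is a rough sketch of the unconditional partial init
idea (the helper name is made up, and this is not code from either patch): clear
the one field that gets read before the REQ_F_WORK_INITIALIZED check during
request setup, and leave the full io_wq_work init to the punted path.

static inline void io_req_partial_work_init(struct io_kiocb *req)
{
	/* creds is checked in io_req_work_drop_env() without looking at
	 * REQ_F_WORK_INITIALIZED, so make that check well-defined even
	 * for requests submitted and completed inline */
	req->work.creds = NULL;
}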

> 
> static inline void io_req_work_drop_env(struct io_kiocb *req)
> {
>     /* always init'ed, put before REQ_F_WORK_INITIALIZED check */
>     if (req->work.creds) {
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> Here req->work.creds will be invalid, or else I still need to use some space to
> record the original req->work.creds and restore it later.
> 
>         put_cred(req->work.creds);
>         req->work.creds = NULL;
>     }
>     if (!(req->flags & REQ_F_WORK_INITIALIZED))
>          return;
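
For completeness, a rough sketch of the record-and-restore alternative mentioned
in the comment above (saved_creds is a hypothetical member that would have to be
added to struct async_poll; none of this is code from either patch):

/* must run before req->apoll is written, since apoll and work.creds
 * share the same offset in the union */
static void io_poll_save_creds(struct io_kiocb *req, struct async_poll *apoll)
{
	apoll->saved_creds = req->work.creds;	/* hypothetical member */
}

/* run when the poll handler hands the request back for normal completion */
static void io_poll_restore_creds(struct io_kiocb *req, struct async_poll *apoll)
{
	req->work.creds = apoll->saved_creds;	/* hypothetical member */
}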
> 
> 2. For the IORING_OP_POLL_ADD case, current mainline code uses task_work and
> hash_node, 32 bytes, which means io_wq_work's members list, func, files and mm
> would be overwritten, but creds is not touched, so it's safe for now. But if
> some new member is ever added to
> struct {
>     struct callback_head    task_work;
>     struct hlist_node    hash_node;
>     struct async_poll    *apoll;
> };
> (say struct callback_head gains a new member), the check will break again.
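
If that layout assumption has to stay, one option is to document it with a
build-time guard. Only a sketch, not part of either patch; it would live in
something like io_uring_init():

	/* the poll-only members (task_work + hash_node) must not grow into
	 * the storage that overlaps work.creds, or the creds check above
	 * silently breaks */
	BUILD_BUG_ON(offsetofend(struct io_kiocb, hash_node) >
		     offsetof(struct io_kiocb, work.creds));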
> 
> 3. IMO, io_wq_work is just meant to describe the running environment needed by
> reqs that will be punted to io-wq; reqs that are submitted and completed inline
> should not touch this struct at all from a software design point of view. Also,
> the current io_kiocb is 240 bytes, and with a new pointer it

Instead, it stores an entity in 2 different places, adding yet another
thing/state to keep in mind. Arguable. I'd rather say -- neither one is better.
And that's why I like the simplicity of initialising it always.


> would be 248 bytes, which is still 4 cache lines with 64-byte cache lines.
> 
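
As a side note, that cache-line claim could be pinned down the same way. Again
just a sketch, assuming the common 64-byte cache line, placed e.g. in
io_uring_init():

	/* 248 bytes still fits in four 64-byte cache lines (4 * 64 = 256) */
	BUILD_BUG_ON(sizeof(struct io_kiocb) > 4 * L1_CACHE_BYTES);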
> 

-- 
Pavel Begunkov


