Re: Follow up on UBD discussion

hi,

>
>>>> Second, I'd like to share some ideas on UBD. I'm not sure whether they
>>>> are reasonable, so please point out my mistakes.
>>>>
>>>> 1) UBD issues one sqe to commit the last completed request and fetch a
>>>> new one. Then blk-mq's queue_rq() issues a new UBD IO request and
>>>> completes one cqe for the fetch command. We have observed that
>>>> io_submit_sqes() costs some CPU, and the steps of building a new sqe may
>>>> lower throughput. Here I'd like to suggest a new solution: never submit
>>>> an sqe, but make up a cqe (carrying the information of the new UBD IO
>>>> request) when queue_rq() is called. I am inspired by one io_uring flag,
>>>> IORING_POLL_ADD_MULTI, with which a user issues only one sqe for polling
>>>> an fd and then repeatedly gets multiple cqes as new events occur. Does
>>>> this solution break the architecture of UBD?
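For reference, the multishot pattern I mean looks roughly like this from
userspace. This is only a minimal sketch, assuming liburing >= 2.1 and some
pollable fd; it is not UBD code:

#include <poll.h>
#include <liburing.h>

/*
 * Minimal multishot-poll sketch: a single sqe keeps producing cqes
 * until the kernel stops setting IORING_CQE_F_MORE. This is the
 * one-sqe-many-cqes behavior the idea above borrows.
 */
static int poll_events_multishot(struct io_uring *ring, int fd)
{
	struct io_uring_cqe *cqe;
	struct io_uring_sqe *sqe = io_uring_get_sqe(ring);

	io_uring_prep_poll_multishot(sqe, fd, POLLIN);
	io_uring_submit(ring);

	for (;;) {
		int ret = io_uring_wait_cqe(ring, &cqe);
		if (ret < 0)
			return ret;
		/* handle the readiness event here ... */
		int more = cqe->flags & IORING_CQE_F_MORE;
		io_uring_cqe_seen(ring, cqe);
		if (!more)
			break;	/* multishot terminated; re-arm if still needed */
	}
	return 0;
}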
>>> But each cqe has to be associated with one sqe, if I understand
>>> correctly.
>> Yeah, in the current io_uring implementation it is. But suppose io_uring
>> offered a helper like the one below:
>> /* post a cqe to the target ring directly, without any matching sqe */
>> void io_gen_cqe_direct(struct file *file, u64 user_data, s32 res, u32 cflags)
>> {
>>         struct io_ring_ctx *ctx = file->private_data;
>>
>>         spin_lock(&ctx->completion_lock);
>>         __io_fill_cqe(ctx, user_data, res, cflags);
>>         io_commit_cqring(ctx);
>>         spin_unlock(&ctx->completion_lock);
>>         io_cqring_ev_posted(ctx);
>> }
>>
>> Then in the ubd driver:
>> 1) Device setup stage:
>> We attach the io_uring file and a user_data to every ubd hardware queue.
>>
>> 2) When blk-mq's queue_rq() is called:
>> io_gen_cqe_direct() is called in ubd's queue_rq(), and we put the ubd io
>> request's qid and tag info into the cqe's res field; then we no longer
>> need to issue sqes to fetch io cmds.
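To make the above concrete, a hypothetical sketch of such a queue_rq() could
look as follows. The ubd_queue fields (ring_file, cmd_user_data, q_id) and
the qid/tag encoding in res are illustrative assumptions, not existing UBD
code:

#include <linux/blk-mq.h>

/*
 * Hypothetical sketch: each ubd hardware queue remembers the io_uring
 * file and a per-queue user_data from the device setup stage.
 */
static blk_status_t ubd_queue_rq(struct blk_mq_hw_ctx *hctx,
				 const struct blk_mq_queue_data *bd)
{
	struct ubd_queue *ubq = hctx->driver_data;
	struct request *rq = bd->rq;

	blk_mq_start_request(rq);

	/* encode qid and tag into res so ubdsrv knows which io to handle */
	io_gen_cqe_direct(ubq->ring_file, ubq->cmd_user_data,
			  (ubq->q_id << 16) | rq->tag, 0);
	return BLK_STS_OK;
}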
> The above way actually goes against the io_uring design, and I don't think
> it can improve things much, since submitting UBD_IO_COMMIT_AND_FETCH_REQ is
> pretty lightweight.
Actually, I didn't come up with this idea mainly for performance reasons :)
I'm just trying to simplify the code a bit:
1) In the current implementation, ubdsrv first needs to submit a queue-depth
number of sqes, and ubd_ctrl_start_dev() also needs to wait until all of them
have been submitted (see the sketch after this list).
2) It makes ubd_queue_rq() simpler: it may just make a single
io_gen_cqe_direct() call, as in the sketch above.
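For reference, 1) looks roughly like the following today. This is only a
sketch: I'm writing the uring_cmd sqe layout, UBD_IO_FETCH_REQ, struct
ubdsrv_io_cmd, and the ubd_queue fields from memory, so treat the names as
approximate rather than the exact ubdsrv code:

#include <stdint.h>
#include <string.h>
#include <liburing.h>

/*
 * Sketch: one FETCH uring_cmd sqe has to be queued per tag before
 * ubd_ctrl_start_dev() can finish.
 */
static void ubd_queue_submit_fetch_all(struct ubd_queue *q)
{
	int tag;

	for (tag = 0; tag < q->q_depth; tag++) {
		struct io_uring_sqe *sqe = io_uring_get_sqe(&q->ring);
		struct ubdsrv_io_cmd *cmd = (struct ubdsrv_io_cmd *)&sqe->cmd;

		memset(sqe, 0, sizeof(*sqe));
		sqe->opcode = IORING_OP_URING_CMD;
		sqe->fd = q->cdev_fd;			/* /dev/ubdcN */
		sqe->cmd_op = UBD_IO_FETCH_REQ;
		sqe->user_data = tag;
		cmd->q_id = q->q_id;
		cmd->tag = tag;
		cmd->addr = (__u64)(uintptr_t)q->io_buf[tag];
	}
	io_uring_submit(&q->ring);	/* ubd_ctrl_start_dev() waits for these */
}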

>
> Also, without re-submitting the UBD_IO_COMMIT_AND_FETCH_REQ command, how can
> you commit the io handling result from the ubd server and ask the ubd driver
> to complete the io request?
No, I don't mean to remove the COMMIT command; we still need the io_uring
async command feature to support the ubd COMMIT and GETDATA commands.
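Concretely, the commit path that stays keeps the same uring_cmd prep as in
the fetch sketch above; only the opcode changes and the io handling result
gets filled in (names approximate, as before):

	/* after ubdsrv handles an io: commit its result and fetch the next */
	sqe->cmd_op = UBD_IO_COMMIT_AND_FETCH_REQ;	/* or the GETDATA variant */
	cmd->result = io_res;	/* handling result passed back to the ubd driver */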

I have another concern: currently there are many flags in the ubd kernel
driver and in ubdsrv, such as:
#define UBDSRV_NEED_FETCH_RQ (1UL << 0)
#define UBDSRV_NEED_COMMIT_RQ_COMP (1UL << 1)
#define UBDSRV_IO_FREE (1UL << 2)
#define UBDSRV_IO_HANDLING (1UL << 3)
#define UBDSRV_NEED_GET_DATA (1UL << 4)

Some of their names look weird, for example UBDSRV_IO_FREE. I think having
more flags may result in more state machine errors.

Regards,
Xiaoguang Wang
>
>
> Thanks, 
> Ming



