Re: [PATCH v1 1/4] io_uring: only account cqring wait time as iowait if enabled for a ring

On 2024-02-24 07:31, Pavel Begunkov wrote:
> On 2/24/24 05:07, David Wei wrote:
>> Currently we unconditionally account time spent waiting for events in
>> the CQ ring as iowait time.
>>
>> Some userspace tools consider iowait time to be CPU util/load, which
>> can be misleading as the process is sleeping. High iowait time might
>> be indicative of issues for storage IO, but for network IO, e.g.
>> socket recv(), we do not control when the completions happen, so its
>> value misleads userspace tooling.
>>
>> This patch gates the previously unconditional iowait accounting behind a
>> new IORING_REGISTER opcode. By default time is not accounted as iowait,
>> unless this is explicitly enabled for a ring. Thus userspace can decide,
>> depending on the type of work it expects to do, whether it wants to
>> consider cqring wait time as iowait or not.
> 
> I don't believe it's a sane approach. I think we agree that per-cpu
> iowait is a silly and misleading metric. I have a hard time defining
> what it even is, and I'm sure most of the people complaining wouldn't
> be able to either. Now we're taking that metric and exposing even
> more knobs to userspace.
> 
> Another argument against it is that per-ctx is not the right place
> for it. It's a system metric, and you can imagine some sysadmin
> looking at it. Even in cases where it had some meaning without
> io_uring, it's now completely meaningless unless you also check what
> flags each io_uring instance has set, and that's too much to ask.
> 
> I don't understand why people freak out at seeing high iowait; IMHO
> it perfectly fits the definition of io_uring waiting for IO /
> completions. But at this point it might be better to just revert to
> the old behaviour of not reporting iowait at all.

Irrespective of how misleading iowait is, many tools include it in their
CPU util/load calculations, and users then rely on those metrics for
e.g. load balancing. For storage workloads, iowait can still be useful,
even if only in a limited way. The problem this patch is trying to
resolve is mixed storage/network workloads on the same system, where
iowait has some usefulness (because of the storage workloads) _but_ I
don't want the network workloads contributing to the metric.

This does put the onus on userspace to do the right thing - deciding
whether iowait makes sense for a workload or not. I don't have enough
kernel experience to know whether that expectation is realistic. But it
is turned off by default, so if userspace does not set it (which seems
like the most likely case) then iowait accounting stays off, just like
the old behaviour. Perhaps we need to make it clearer to storage
use-cases that they should turn it on to get the optimisation?

> And if we want to keep the cpufreq iowait optimisation, we should
> just split the notions of iowait reporting and iowait cpufreq
> tuning.

Yeah, that could be an option. I'll take a look at it.
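
Just to make sure I understand the split correctly, something roughly
like the below is what I have in mind - purely illustrative, assuming a
hypothetical per-task in_iowait_acct bit that nr_iowait()/nr_iowait_cpu()
would look at for the reported statistic, while in_iowait keeps driving
the cpufreq/idle heuristics:

	/*
	 * Illustrative sketch only, not a real patch: always give the
	 * scheduler the "waiting for IO" hint so the low-QD cpufreq boost
	 * is preserved, but only contribute to the reported iowait stat
	 * when the ring opted in.
	 */
	io_wait = current->in_iowait;
	if (current_pending_io()) {
		current->in_iowait = 1;			/* cpufreq/idle hint */
		if (ctx->iowait_enabled)
			current->in_iowait_acct = 1;	/* hypothetical: /proc/stat iowait */
	}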

> 
> 
>> Signed-off-by: David Wei <dw@xxxxxxxxxxx>
>> ---
>>   include/linux/io_uring_types.h |  1 +
>>   include/uapi/linux/io_uring.h  |  3 +++
>>   io_uring/io_uring.c            |  9 +++++----
>>   io_uring/register.c            | 17 +++++++++++++++++
>>   4 files changed, 26 insertions(+), 4 deletions(-)
>>
>> diff --git a/include/linux/io_uring_types.h b/include/linux/io_uring_types.h
>> index bd7071aeec5d..c568e6b8c9f9 100644
>> --- a/include/linux/io_uring_types.h
>> +++ b/include/linux/io_uring_types.h
>> @@ -242,6 +242,7 @@ struct io_ring_ctx {
>>           unsigned int        drain_disabled: 1;
>>           unsigned int        compat: 1;
>>           unsigned int        iowq_limits_set : 1;
>> +        unsigned int        iowait_enabled: 1;
>> 
>>           struct task_struct    *submitter_task;
>>           struct io_rings        *rings;
>> diff --git a/include/uapi/linux/io_uring.h b/include/uapi/linux/io_uring.h
>> index 7bd10201a02b..b068898c2283 100644
>> --- a/include/uapi/linux/io_uring.h
>> +++ b/include/uapi/linux/io_uring.h
>> @@ -575,6 +575,9 @@ enum {
>>       IORING_REGISTER_NAPI            = 27,
>>       IORING_UNREGISTER_NAPI            = 28,
>> 
>> +    /* account time spent in cqring wait as iowait */
>> +    IORING_REGISTER_IOWAIT            = 29,
>> +
>>       /* this goes last */
>>       IORING_REGISTER_LAST,
>> 
>> diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
>> index cf2f514b7cc0..7f8d2a03cce6 100644
>> --- a/io_uring/io_uring.c
>> +++ b/io_uring/io_uring.c
>> @@ -2533,12 +2533,13 @@ static inline int io_cqring_wait_schedule(struct io_ring_ctx *ctx,
>>           return 0;
>> 
>>      /*
>> -     * Mark us as being in io_wait if we have pending requests, so cpufreq
>> -     * can take into account that the task is waiting for IO - turns out
>> -     * to be important for low QD IO.
>> +     * Mark us as being in io_wait if we have pending requests if enabled
>> +     * via IORING_REGISTER_IOWAIT, so cpufreq can take into account that
>> +     * the task is waiting for IO - turns out to be important for low QD
>> +     * IO.
>>        */
>>       io_wait = current->in_iowait;
>> -    if (current_pending_io())
>> +    if (ctx->iowait_enabled && current_pending_io())
>>           current->in_iowait = 1;
>>       ret = 0;
>>       if (iowq->timeout == KTIME_MAX)
>> diff --git a/io_uring/register.c b/io_uring/register.c
>> index 99c37775f974..fbdf3d3461d8 100644
>> --- a/io_uring/register.c
>> +++ b/io_uring/register.c
>> @@ -387,6 +387,17 @@ static __cold int io_register_iowq_max_workers(struct io_ring_ctx *ctx,
>>       return ret;
>>   }
>> 
>> +static int io_register_iowait(struct io_ring_ctx *ctx, int val)
>> +{
>> +    int was_enabled = ctx->iowait_enabled;
>> +
>> +    if (val)
>> +        ctx->iowait_enabled = 1;
>> +    else
>> +        ctx->iowait_enabled = 0;
>> +    return was_enabled;
>> +}
>> +
>>   static int __io_uring_register(struct io_ring_ctx *ctx, unsigned opcode,
>>                      void __user *arg, unsigned nr_args)
>>       __releases(ctx->uring_lock)
>> @@ -563,6 +574,12 @@ static int __io_uring_register(struct io_ring_ctx *ctx, unsigned opcode,
>>               break;
>>           ret = io_unregister_napi(ctx, arg);
>>           break;
>> +    case IORING_REGISTER_IOWAIT:
>> +        ret = -EINVAL;
>> +        if (arg)
>> +            break;
>> +        ret = io_register_iowait(ctx, nr_args);
>> +        break;
>>       default:
>>           ret = -EINVAL;
>>           break;
> 



