Dmitry Antipov <dmantipov@xxxxxxxxx> writes:

> In 'io_cqring_schedule_timeout()', do not assume that 'ktime_t' is
> equal to nanoseconds and prefer 'ktime_add()' over 'ktime_add_ns()'
> to sum two 'ktime_t' values. Compile tested only.
>
> Fixes: 1100c4a2656d ("io_uring: add support for batch wait timeout")
> Signed-off-by: Dmitry Antipov <dmantipov@xxxxxxxxx>
> ---
>  io_uring/io_uring.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
> index ceacf6230e34..7f2500aca95c 100644
> --- a/io_uring/io_uring.c
> +++ b/io_uring/io_uring.c
> @@ -2434,7 +2434,7 @@ static int io_cqring_schedule_timeout(struct io_wait_queue *iowq,
> 	ktime_t timeout;
>
> 	if (iowq->min_timeout) {
> -		timeout = ktime_add_ns(iowq->min_timeout, start_time);
> +		timeout = ktime_add(iowq->min_timeout, start_time);

I don't think this solves the issue stated in the commit message. Look
at where the min_timeout comes from, in io_get_ext_arg():

	ext_arg->min_time = READ_ONCE(w->min_wait_usec) * NSEC_PER_USEC;

Perhaps that should be:

	ext_arg->min_time = us_to_ktime(READ_ONCE(w->min_wait_usec));

I also don't know whether this warrants a Fixes tag, given it doesn't
change any behavior.

Cheers,
Jeff