Resizing io_uring SQ/CQ?

Hi,
For block I/O an application can queue excess SQEs in userspace when the
SQ ring becomes full, since in-flight requests complete on their own and
eventually free up SQ slots. For network and IPC operations that is not
possible: a completion may depend on another SQE being submitted first
(e.g. a socket read only completes once the other side's write is
submitted), so deadlocks can occur when socket, pipe, and eventfd SQEs
cannot be submitted.
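
For reference, here is roughly what I mean by queueing excess SQEs in
userspace (a minimal liburing sketch; the struct and helper names are
invented):

#include <liburing.h>
#include <sys/queue.h>

struct pending_read {
    STAILQ_ENTRY(pending_read) link;
    int fd;
    void *buf;
    unsigned len;
    __u64 offset;
};

STAILQ_HEAD(pending_list, pending_read);

/*
 * Submit a read, or park it in a userspace overflow list when the SQ
 * ring is full. This is safe for block I/O because in-flight requests
 * complete on their own, freeing SQ slots for the parked entries.
 */
static void submit_or_park(struct io_uring *ring,
                           struct pending_list *overflow,
                           struct pending_read *op)
{
    struct io_uring_sqe *sqe = io_uring_get_sqe(ring);

    if (!sqe) {
        STAILQ_INSERT_TAIL(overflow, op, link);
        return;
    }
    io_uring_prep_read(sqe, op->fd, op->buf, op->len, op->offset);
    io_uring_sqe_set_data(sqe, op);
}

For a socket the parked entry may never get a slot if the SQEs already
in flight are themselves waiting on it; that is the deadlock above.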

Sometimes the application does not know how many SQEs/CQEs are needed upfront
and that's when we face this challenge.

A simple solution is to call io_uring_setup(2) with a higher entries
value than you'll ever need. However, if that value is exceeded then
we're back to the deadlock scenario and that worries me.
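
Concretely, over-provisioning looks something like this (the sizes are
arbitrary over-estimates; IORING_SETUP_CQSIZE lets the CQ ring be sized
independently of the SQ ring):

#include <string.h>
#include <liburing.h>

static int setup_oversized_ring(struct io_uring *ring)
{
    struct io_uring_params params;

    memset(&params, 0, sizeof(params));
    params.flags = IORING_SETUP_CQSIZE;  /* size the CQ ring separately */
    params.cq_entries = 32768;           /* "bigger than ever needed" */

    /* 4096 SQ entries, also an arbitrary over-estimate */
    return io_uring_queue_init_params(4096, ring, &params);
}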

I've thought about userspace solutions like keeping a list of io_uring
contexts where a new io_uring context is created and inserted at the
head every time a resize is required. New SQEs are only submitted to the
head io_uring context. The older io_uring contexts are drained until the
CQ ring is empty and then destroyed. But this seems complex to me.
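
In code the idea would look roughly like this (a sketch, all names
invented):

#include <errno.h>
#include <stdlib.h>
#include <liburing.h>

struct ring_node {
    struct io_uring ring;
    unsigned inflight;          /* SQEs submitted but not yet completed */
    struct ring_node *next;
};

static struct ring_node *head;  /* newest ring; all new SQEs go here */

/* "Resize" by allocating a bigger ring and making it the new head. */
static int grow(unsigned new_entries)
{
    struct ring_node *node = calloc(1, sizeof(*node));
    int ret;

    if (!node)
        return -ENOMEM;
    ret = io_uring_queue_init(new_entries, &node->ring, 0);
    if (ret < 0) {
        free(node);
        return ret;
    }
    node->next = head;
    head = node;
    return 0;
}

/* Reap completions on the old rings; destroy each one once it is idle. */
static void drain_old_rings(void)
{
    struct ring_node **prev;
    struct ring_node *n;

    if (!head)
        return;
    prev = &head->next;
    n = head->next;
    while (n) {
        struct io_uring_cqe *cqe;

        while (io_uring_peek_cqe(&n->ring, &cqe) == 0) {
            /* ... process the completion ... */
            io_uring_cqe_seen(&n->ring, cqe);
            n->inflight--;
        }
        if (n->inflight == 0) {
            *prev = n->next;
            io_uring_queue_exit(&n->ring);
            free(n);
            n = *prev;
        } else {
            prev = &n->next;
            n = n->next;
        }
    }
}

The awkward part is that completions now arrive on several rings, so
the event loop has to poll all of them until the old ones drain.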

Another idea is a new io_uring_register(2) IORING_REGISTER_RING_SIZE
opcode (a userspace sketch follows the steps below):
1. Userspace ensures that the kernel has seen all SQEs in the SQ ring.
2. Userspace munmaps the SQ/CQ ring mappings.
3. Userspace calls io_uring_register(2) IORING_REGISTER_RING_SIZE with the new size.
4. The kernel allocates the new ring.
5. The kernel copies over CQEs that userspace has not consumed from the
   old CQ ring to the new one.
6. The io_uring_register(2) syscall returns.
7. Userspace mmaps the rings from the fd again.
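
To make that concrete, the userspace side might look like the sketch
below. None of this exists today: the opcode, its number, and its
argument layout are all made up, and the munmap/mmap bookkeeping is
elided:

#include <errno.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <liburing.h>

#define IORING_REGISTER_RING_SIZE 99    /* hypothetical opcode number */

static int resize_ring(struct io_uring *ring, unsigned new_entries)
{
    int ret;

    /* 1. Ensure the kernel has seen every SQE in the SQ ring. */
    ret = io_uring_submit(ring);
    if (ret < 0)
        return ret;

    /* 2. Unmap the SQ/CQ rings and the SQE array (lengths elided). */
    /* munmap(ring->sq.ring_ptr, ...); munmap(ring->sq.sqes, ...); */

    /*
     * 3.-6. The kernel allocates the new rings and copies unconsumed
     * CQEs from the old CQ ring before the syscall returns.
     */
    ret = syscall(__NR_io_uring_register, ring->ring_fd,
                  IORING_REGISTER_RING_SIZE, &new_entries, 1);
    if (ret < 0)
        return -errno;

    /* 7. mmap() the resized rings and refresh the cached pointers. */
    /* mmap(NULL, ..., ring->ring_fd, IORING_OFF_SQ_RING); etc. */

    return 0;
}

Step 1 matters because any SQEs the kernel has not yet consumed would
otherwise be stranded in the old SQ ring.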

How do you deal with changing ring size at runtime?

Thanks,
Stefan
