Re: [RFC] io_uring: add restrictions to support untrusted applications and guests

On Tue, Jun 16, 2020 at 01:32:54PM +0200, Jann Horn wrote:
> On Tue, Jun 16, 2020 at 11:13 AM Stefano Garzarella <sgarzare@xxxxxxxxxx> wrote:
> > On Mon, Jun 15, 2020 at 11:00:25AM -0600, Jens Axboe wrote:
> > > On 6/15/20 7:33 AM, Stefano Garzarella wrote:
> > > > On Mon, Jun 15, 2020 at 11:04:06AM +0200, Jann Horn wrote:
> > > >> +Kees, Christian, Sargun, Aleksa, kernel-hardening for their opinions
> > > >> on seccomp-related aspects
> > > >>
> > > >> On Tue, Jun 9, 2020 at 4:24 PM Stefano Garzarella <sgarzare@xxxxxxxxxx> wrote:
> > > >>> Hi Jens,
> > > >>> Stefan and I have a proposal to share with io_uring community.
> > > >>> Before implementing it we would like to discuss it to receive feedback
> > > >>> and to see if it could be accepted:
> > > >>>
> > > >>> Adding restrictions to io_uring
> > > >>> ===============================
> > > >>> The io_uring API provides submission and completion queues for performing
> > > >>> asynchronous I/O operations. The queues are located in memory that is
> > > >>> accessible to both the host userspace application and the kernel, making it
> > > >>> possible to monitor for activity through polling instead of system calls. This
> > > >>> design offers good performance, which makes exposing io_uring to guests
> > > >>> an attractive idea for improving I/O performance in virtualization.
> > > >> [...]
> > > >>> Restrictions
> > > >>> ------------
> > > >>> This document proposes io_uring API changes that safely allow untrusted
> > > >>> applications or guests to use io_uring. io_uring's existing security model is
> > > >>> that of kernel system call handler code. It is designed to reject invalid
> > > >>> inputs from host userspace applications. Supporting guests as io_uring API
> > > >>> clients adds a new trust domain with access to even fewer resources than host
> > > >>> userspace applications.
> > > >>>
> > > >>> Guests do not have direct access to host userspace application file descriptors
> > > >>> or memory. The host userspace application, a Virtual Machine Monitor (VMM) such
> > > >>> as QEMU, grants access to a subset of its file descriptors and memory. The
> > > >>> allowed file descriptors are typically the disk image files belonging to the
> > > >>> guest. The memory is typically the virtual machine's RAM that the VMM has
> > > >>> allocated on behalf of the guest.
> > > >>>
> > > >>> The following extensions to the io_uring API allow the host application to
> > > >>> grant access to some of its file descriptors.
> > > >>>
> > > >>> These extensions are designed to be applicable to other use cases besides
> > > >>> untrusted guests and are not virtualization-specific. For example, the
> > > >>> restrictions can be used to make only a subset of sqe operations
> > > >>> available to an application, similar to seccomp syscall whitelisting.
> > > >>>
> > > >>> An address translation and memory restriction mechanism would also be
> > > >>> necessary, but we can discuss this later.
> > > >>>
> > > >>> The IORING_REGISTER_RESTRICTIONS opcode
> > > >>> ---------------------------------------
> > > >>> The new io_uring_register(2) IORING_REGISTER_RESTRICTIONS opcode permanently
> > > >>> installs a feature whitelist on an io_ring_ctx. The io_ring_ctx can then be
> > > >>> passed to untrusted code with the knowledge that only operations present in the
> > > >>> whitelist can be executed.
> > > >>
> > > >> This approach of first creating a normal io_uring instance and then
> > > >> installing restrictions separately in a second syscall means that it
> > > >> won't be possible to use seccomp to restrict newly created io_uring
> > > >> instances; code that should be subject to seccomp restrictions and
> > > >> uring restrictions would only be able to use preexisting io_uring
> > > >> instances that have already been configured by trusted code.
> > > >>
> > > >> So I think that from the seccomp perspective, it might be preferable
> > > >> to set up these restrictions in the io_uring_setup() syscall. It might
> > > >> also be a bit nicer from a code cleanliness perspective, since you
> > > >> won't have to worry about concurrently changing restrictions.
> > > >>
> > > >
> > > > Thank you for these details!
> > > >
> > > > It seems feasible to include the restrictions during io_uring_setup().
> > > >
> > > > My only doubt concerns allowing the trusted code to do some operations
> > > > before passing the queues to the untrusted code, for example registering
> > > > file descriptors, buffers, eventfds, etc.
> > > >
> > > > To avoid this, I would have to include these operations in io_uring_setup(),
> > > > adding code that I had hoped to avoid by reusing io_uring_register().
> > > >
> > > > If I add restrictions in io_uring_setup() and then add an operation to
> > > > go into safe mode (e.g. a flag in io_uring_enter()), we would have the same
> > > > problem, right?
> > > >
> > > > Just to be clear, I mean something like this:
> > > >
> > > >     /* params will include restrictions */
> > > >     fd = io_uring_setup(entries, params);
> > > >
> > > >     /* trusted code */
> > > >     io_uring_register_files(fd, ...);
> > > >     io_uring_register_buffers(fd, ...);
> > > >     io_uring_register_eventfd(fd, ...);
> > > >
> > > >     /* enable safe mode */
> > > >     io_uring_enter(fd, ..., IORING_ENTER_ENABLE_RESTRICTIONS);
> > > >
> > > >
> > > > Anyway, including a list of things to register in the 'params' passed
> > > > to io_uring_setup() should be feasible, if Jens agrees :-)
> > >
> > > I wonder how best to deal with this, in terms of ring visibility vs
> > > registering restrictions. We could potentially start the ring in a
> > > disabled mode, if asked to. It'd still be visible in terms of having
> > > the fd installed, but it'd just error requests. That'd leave you with
> > > time to do the various setup routines needed before then flagging it
> > > as enabled. My only worry on that would be adding overhead for doing
> > > that. It'd be cheap enough to check for IORING_SETUP_DISABLED in
> > > ctx->flags in io_uring_enter(), and return -EBADFD or something if
> > > that's the case. That doesn't cover the SQPOLL case, though; maybe we
> > > just don't start the sq thread if IORING_SETUP_DISABLED is set.
> >
> > This seems like a very good approach to me, and it is easy to implement.
> > In this way we can reuse io_uring_register() without having to modify
> > io_uring_setup() too much.
> >
> > >
> > > We'd need a way to clear IORING_SETUP_DISABLED through
> > > io_uring_register(). When clearing, that could then start the sq thread
> > > as well, when SQPOLL is set.
> >
> > Could we do it using io_uring_enter(), since we have a flags field, or
> > do you think that would be semantically incorrect?
> >
> > @Jann, do you think this could work with seccomp?
> 
> To clarify that I understood your proposal correctly: Is the idea to
> have two types of mostly orthogonal restrictions; one type being
> restrictions on the opcode (supplied in io_uring_setup() and enforced
> immediately) and the other type being restrictions on
> io_uring_register() (enabled via IORING_ENTER_ENABLE_RESTRICTIONS)?

Slightly different. The idea is to start the ring in a disabled mode,
where no submission ops are allowed.

In this way the trusted code can do the various setup steps (e.g. using
io_uring_register() to register file descriptors, buffers, restrictions,
etc.). When the setup phase is finished, the trusted code enables the
ring using io_uring_register() or io_uring_enter() with a special flag.

After this last syscall, submissions are enabled and restricted
(if restrictions have been registered).
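
To make this concrete, here is a rough sketch of the flow I have in
mind. Nothing is implemented yet, so the names are placeholders: I'm
reusing IORING_SETUP_DISABLED from Jens's mail and making up an
IORING_REGISTER_ENABLE_RINGS opcode for the enable step:

    /* create the ring disabled: submissions fail until it is enabled */
    params.flags |= IORING_SETUP_DISABLED;
    fd = io_uring_setup(entries, &params);

    /* trusted code: do the setup while the ring is disabled */
    io_uring_register(fd, IORING_REGISTER_FILES, fds, nr_fds);
    io_uring_register(fd, IORING_REGISTER_BUFFERS, iovecs, nr_iovecs);
    io_uring_register(fd, IORING_REGISTER_RESTRICTIONS, res, nr_res);

    /*
     * Enable the ring (and start the sq thread if SQPOLL is set).
     * From now on submissions are allowed, subject to the registered
     * restrictions, so the fd can be passed to the untrusted code.
     */
    io_uring_register(fd, IORING_REGISTER_ENABLE_RINGS, NULL, 0);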

I hope that's a little clearer; sorry it wasn't before.

Stefano



