Re: passing FDs across domains

Hi Gerd,

Sorry for the late reply. It's been a crazy two weeks.

On Tue, Apr 2, 2019 at 1:19 AM Gerd Hoffmann <kraxel@xxxxxxxxxx> wrote:
>
>   Hi,
>
> > > Camera was mentioned too.
> >
> > Right, forgot to mention it.
> >
> > Actually for cameras it gets complicated even if we put the buffer
> > sharing aside. The key point is that on modern systems, which need
> > more advanced camera capabilities than a simple UVC webcam, the camera
> > is in fact a whole subsystem of hardware components, i.e. sensors,
> > lenses, raw capture I/F and 1 or more ISPs. Currently the only
> > relatively successful way of standardizing the way to control those is
> > the Android Camera HALv3 API, which is a relatively complex userspace
> > interface. Getting feature parity, which is crucial for the use cases
> > Chrome OS is targeting, is going to require quite a sophisticated
> > interface between the host and guest.
>
> Sounds tricky indeed, especially the signal processor part.
> Any plans already how to tackle that?
>

There are two possible approaches here:

1) a platform-specific one - in our case we already have a camera
service on the host that exposes an IPC interface (Mojo), with
existing clients talking that IPC, and we are considering moving some
of those clients into virtual machines. Since that IPC is only usable
between our host and our guests, it doesn't make much sense to
abstract it, which is why I mentioned generic IPC pass-through before.

2) an abstract one - modelling the camera from a high-level
perspective, without exposing too much detail. Basically we would
want to expose all of the functionality, but not the hardware
topology. I still need to think a bit more about this, but the
general idea would be to expose logical cameras, each providing a
number of video streams with certain parameters. That would roughly
match the functionality of the Android Camera HALv3 API, which is
currently the most feature-complete and widely adopted camera API in
the consumer market, but without imposing its details on the virtio
interface.
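
Just to give a rough idea of the granularity I have in mind (this is
purely hypothetical, not a proposal; all the names are made up):

#include <stdint.h>

/* Hypothetical sketch only: a logical camera advertises the streams
 * it can produce, not the sensor/ISP topology behind them. */
struct camera_stream_format {
        uint32_t fourcc;                /* pixel format */
        uint32_t width, height;
        uint32_t max_fps;
};

struct camera_stream_caps {
        uint32_t stream_id;
        uint32_t num_formats;
        struct camera_stream_format formats[];  /* supported modes */
};

i.e. the guest would only see "camera N can produce streams X, Y, Z
with these formats", while how the host wires up sensors and ISPs to
produce them would stay hidden.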

> > Mojo IPC. Mojo is just yet another IPC designed to work over a Unix
> > socket, relying on file descriptor passing (SCM_RIGHTS) for passing
> > various platform handles (e.g. DMA-bufs). The clients exchange
> > DMA-bufs with the service.
>
> Only dma-bufs?
>

Mojo is just a framework that can serialize things and pass various
objects around. What is being passed depends on the particular
interface.

For the camera use case that would be DMA-bufs and fences.

We also have some more general use cases where files, sockets and
other objects are actually passed around. Those can be handled easily
enough with a userspace proxy, though - not very efficiently, but
efficiency isn't a requirement for those use cases.
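
(For reference, on the Unix socket side such a proxy is just the usual
SCM_RIGHTS dance; a minimal sketch, with error handling omitted and
send_fd() being only an illustrative helper name:

#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

/* Send one file descriptor over an AF_UNIX socket as SCM_RIGHTS
 * ancillary data, together with a single dummy payload byte. */
static int send_fd(int sock, int fd)
{
        char data = 0;
        struct iovec iov = { .iov_base = &data, .iov_len = 1 };
        union {
                char buf[CMSG_SPACE(sizeof(int))];
                struct cmsghdr align;
        } u;
        struct msghdr msg = {
                .msg_iov = &iov,
                .msg_iovlen = 1,
                .msg_control = u.buf,
                .msg_controllen = sizeof(u.buf),
        };
        struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);

        cmsg->cmsg_level = SOL_SOCKET;
        cmsg->cmsg_type = SCM_RIGHTS;
        cmsg->cmsg_len = CMSG_LEN(sizeof(int));
        memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));

        return sendmsg(sock, &msg, 0);
}

The proxy receives the FD like that on one end and translates it into
whatever the transport to the other domain understands.)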

> Handling dma-bufs looks doable without too much trouble to me.  guest ->
> host can pass a scatter list, host -> guest can map the buffer into
> guest address space using the new shared memory support which is planned
> to be added to virtio (for virtio-fs, and virtio-gpu will most likely
> use that too).
>

In some of our cases we would preallocate those buffers via
virtio-gpu, since they would later be used in the GPU or display
pipeline. In that case, sending a virtio-gpu handle sounds more
straightforward.
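
Roughly what I mean by a virtio-gpu handle, as seen from guest
userspace (just a sketch against the virtio-gpu DRM UAPI; the
target/format/bind values are only illustrative and error handling is
omitted):

#include <fcntl.h>
#include <sys/ioctl.h>
#include <drm/drm.h>
#include <drm/virtgpu_drm.h>

/* Allocate a buffer from the guest virtio-gpu DRM node and export it
 * as a dma-buf (PRIME) FD that can then be shared further. */
int alloc_and_export(int width, int height)
{
        int drm_fd = open("/dev/dri/renderD128", O_RDWR | O_CLOEXEC);

        struct drm_virtgpu_resource_create create = {
                .target = 2,       /* 2D texture (illustrative value) */
                .format = 67,      /* e.g. B8G8R8A8, host dependent */
                .bind = 1 << 1,    /* render target (illustrative value) */
                .width = width,
                .height = height,
                .depth = 1,
                .array_size = 1,
        };
        ioctl(drm_fd, DRM_IOCTL_VIRTGPU_RESOURCE_CREATE, &create);

        struct drm_prime_handle prime = {
                .handle = create.bo_handle,
                .flags = DRM_CLOEXEC,
        };
        ioctl(drm_fd, DRM_IOCTL_PRIME_HANDLE_TO_FD, &prime);

        return prime.fd;           /* dma-buf FD */
}

Since such a dma-buf is already backed by a virtio-gpu resource, the
guest side would only need to refer to the existing resource ID
instead of describing the pages again.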

> > > >  - crypto hardware accelerators.
> > >
> > > Note: there is virtio-crypto.
> >
> > Thanks, that's a useful pointer.
> >
> > One more aspect is that the nature of some data may require that only
> > the host can access the decrypted data.
>
> What is the use case?  Playback drm-encrypted media, where the host gpu
> handles decryption?
>

Correct.

> > > One problem with sysv shm is that you can resize buffers.  Which in turn
> > > is the reason why we have memfd with sealing these days.
> >
> > Indeed shm is a bit problematic. However, passing file descriptors of
> > pipe-like objects or regular files could be implemented with a
> > reasonable amount of effort, if some performance trade-offs are
> > acceptable.
>
> Pipes could just create a new vsock stream and use that as transport.

Right.
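
(On the guest side that is just an AF_VSOCK stream socket; a minimal
sketch, assuming the standard vsock socket API and a made-up port
number:

#include <sys/socket.h>
#include <linux/vm_sockets.h>

/* Open a vsock stream to the host (CID 2); the port would be whatever
 * the proxy protocol assigns for this particular pipe. */
static int open_stream_to_host(unsigned int port)
{
        int s = socket(AF_VSOCK, SOCK_STREAM, 0);
        struct sockaddr_vm addr = {
                .svm_family = AF_VSOCK,
                .svm_cid = VMADDR_CID_HOST,
                .svm_port = port,
        };

        if (connect(s, (struct sockaddr *)&addr, sizeof(addr)) < 0)
                return -1;
        return s;
}

and the proxy just splices the pipe data into that stream.)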

>
> Any ideas or plans for files?
>

This is a very interesting problem and also depends on the direction
of transfer.

Host -> guest should be relatively easy: reads/writes could go
inline, while guest-side mmap could rely on the new shared memory
support to map the host file mapping into the guest, although I'm not
sure how that would play with the host fs/block subsystems,
read-ahead, write-back and so on...

Guest -> host would be more complicated. In the simplest approach one
could just push the data inline and expose the files on the host via
FUSE. I'm not sure any memory sharing can be reasonably implemented
here, since the guest-side fs/block layers are not aware of the host
accessing their memory...

> > > Third: Any plan for passing virtio-gpu resources to the host side when
> > > running wayland over virtio-vsock?  With dumb buffers it's probably not
> > > much of a problem, you can grab a list of pages and run with it.  But
> > > for virgl-rendered resources (where the rendered data is stored in a
> > > host texture) I can't see how that will work without copying around the
> > > data.
> >
> > I think it could work the same way as with the virtio-gpu window
> > system pipe being proposed in another thread. The guest vsock driver
> > would figure out that the FD the userspace is trying to pass points to
> > a virtio-gpu resource, convert that to some kind of a resource handle
> > (or descriptor) and pass that to the host. The host vsock
> > implementation would then resolve the resource handle (descriptor)
> > into an object that can be represented as a host file descriptor
> > (DMA-buf?).
>
> Well, when adding wayland stream support to virtio-gpu this is easy.
>
> When using virtio-vsock streams with SCM_RIGHTS this will need some
> cross-driver coordination between virtio-vsock and virtio-gpu on both
> guest and host side.
>
> Possibly such cross-driver coordination is useful for other cases
> too.  virtio-vsock and virtio-fs could likewise work together to allow
> pass-through of handles for regular files.
>

That would also be the case for the virtio-vdec (video decoder) we're
working on.

> > I'd expect that buffers that are used for Wayland surfaces
> > would be more than just a regular GL(ES) texture, since the compositor
> > and virglrenderer would normally be different processes, with the
> > former not having any idea of the latter's textures.
>
> wayland clients export the egl frontbuffer as a dma-buf.
>

I guess that's also an option. In that case the buffer would already
come from the virtio-gpu driver, right? The question then is whether
it had the right bind flags set at allocation time, but given that
it's a front buffer, it should. Then it boils down to the same case
as buffers allocated explicitly from virtio-gpu and imported into EGL
(via EGLImage); only the allocation flow changes.
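
For reference, where the EGL_MESA_image_dma_buf_export extension is
available, that export boils down to roughly this (single-plane case,
no error handling; the function name here is just illustrative):

#include <EGL/egl.h>
#include <EGL/eglext.h>

/* Export the dma-buf backing an EGLImage via
 * EGL_MESA_image_dma_buf_export. */
int export_image_dmabuf(EGLDisplay dpy, EGLImageKHR image)
{
        PFNEGLEXPORTDMABUFIMAGEQUERYMESAPROC query_img =
                (PFNEGLEXPORTDMABUFIMAGEQUERYMESAPROC)
                        eglGetProcAddress("eglExportDMABUFImageQueryMESA");
        PFNEGLEXPORTDMABUFIMAGEMESAPROC export_img =
                (PFNEGLEXPORTDMABUFIMAGEMESAPROC)
                        eglGetProcAddress("eglExportDMABUFImageMESA");

        int fourcc, num_planes;
        EGLuint64KHR modifier;
        query_img(dpy, image, &fourcc, &num_planes, &modifier);

        int fd;
        EGLint stride, offset;
        export_img(dpy, image, &fd, &stride, &offset);

        return fd;      /* dma-buf FD backing the buffer */
}

So as long as the buffer behind the EGLImage came from virtio-gpu with
suitable flags, the exported FD should be recognizable by the guest
driver the same way as an explicitly allocated one.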

> > By the way, are you perhaps planning to visit the Open Source Summit
> > Japan in July [1]?
>
> No.

Got it. I submitted a talk proposal about handling multimedia use
cases inside VMs (not accepted yet), so I thought it could be a good
chance to discuss things. Still, I'd hope we make enough progress that
there isn't much left to discuss before July. ;)

Best regards,
Tomasz


