Hi,

> > Camera was mentioned too.
>
> Right, forgot to mention it.
>
> Actually for cameras it gets complicated even if we put the buffer
> sharing aside. The key point is that on modern systems, which need
> more advanced camera capabilities than a simple UVC webcam, the
> camera is in fact a whole subsystem of hardware components, i.e.
> sensors, lenses, raw capture I/F and one or more ISPs. Currently
> the only relatively successful way of standardizing the way to
> control those is the Android Camera HALv3 API, which is a
> relatively complex userspace interface. Getting feature parity,
> which is crucial for the use cases Chrome OS is targeting, is going
> to require quite a sophisticated interface between the host and
> guest.

Sounds tricky indeed, especially the signal processor part.
Any plans yet on how to tackle that?

> Mojo IPC. Mojo is just yet another IPC designed to work over a Unix
> socket, relying on file descriptor passing (SCM_RIGHTS) for passing
> various platform handles (e.g. DMA-bufs). The clients exchange
> DMA-bufs with the service.

Only dma-bufs?

Handling dma-bufs looks doable without too much trouble to me:
guest -> host can pass a scatter list, and host -> guest can map the
buffer into the guest address space using the new shared memory
support which is planned to be added to virtio (for virtio-fs;
virtio-gpu will most likely use it too).  A rough sketch of a
possible guest -> host message layout is further below.

> > > - crypto hardware accelerators.
> >
> > Note: there is virtio-crypto.
>
> Thanks, that's a useful pointer.
>
> One more aspect is that the nature of some data may require that
> only the host can access the decrypted data.

What is the use case?  Playback of DRM-encrypted media, where the
host gpu handles decryption?

> > One problem with sysv shm is that you can resize buffers.  Which
> > in turn is the reason why we have memfd with sealing these days.
>
> Indeed shm is a bit problematic. However, passing file descriptors
> of pipe-like objects or regular files could be implemented with a
> reasonable amount of effort, if some performance trade-offs are
> acceptable.

Pipes could just create a new vsock stream and use that as transport
(splice sketch further below).  Any ideas or plans for files?

> > Third: Any plan for passing virtio-gpu resources to the host side
> > when running wayland over virtio-vsock?  With dumb buffers it's
> > probably not much of a problem, you can grab a list of pages and
> > run with it.  But for virgl-rendered resources (where the rendered
> > data is stored in a host texture) I can't see how that will work
> > without copying around the data.
>
> I think it could work the same way as with the virtio-gpu window
> system pipe being proposed in another thread. The guest vsock
> driver would figure out that the FD the userspace is trying to pass
> points to a virtio-gpu resource, convert that to some kind of a
> resource handle (or descriptor) and pass that to the host. The host
> vsock implementation would then resolve the resource handle
> (descriptor) into an object that can be represented as a host file
> descriptor (DMA-buf?).

Well, when adding wayland stream support to virtio-gpu this is easy.
When using virtio-vsock streams with SCM_RIGHTS this will need some
cross-driver coordination between virtio-vsock and virtio-gpu, on
both the guest and the host side.

Possibly such cross-driver coordination is useful for other cases
too: virtio-vsock and virtio-fs could likewise work together to
allow pass-through of handles for regular files.
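To make that a bit more concrete, here is a very rough sketch of what
the guest-side coordination could look like.  None of this exists
today; virtio_gpu_fd_to_res_handle() is a made-up hook that
virtio-gpu would have to export:

#include <linux/types.h>
#include <linux/file.h>
#include <linux/errno.h>

/* hypothetical hook, would have to be exported by virtio-gpu */
int virtio_gpu_fd_to_res_handle(struct file *filp, u32 *res_handle);

/*
 * Called by the guest vsock driver for each fd passed via
 * SCM_RIGHTS: if the file belongs to virtio-gpu, forward the
 * resource handle to the host instead of the backing pages.
 */
static int vsock_translate_passed_fd(int fd, u32 *res_handle)
{
        struct file *filp = fget(fd);
        int ret;

        if (!filp)
                return -EBADF;
        ret = virtio_gpu_fd_to_res_handle(filp, res_handle);
        fput(filp);
        return ret;     /* -ENODEV: not a virtio-gpu object */
}

The host side would do the reverse lookup and turn the handle back
into something that can be attached as a file descriptor (a DMA-buf,
as you say).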
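Coming back to the dma-buf passing point above: a guest -> host
message carrying the scatter list could look like this (layout
completely made up, not part of any virtio spec):

#include <stdint.h>

/* hypothetical guest -> host message describing a dma-buf
 * as a list of guest-physical segments */
struct vsock_dmabuf_sglist {
        uint32_t buf_id;        /* guest-chosen buffer id */
        uint32_t nents;         /* number of segments below */
        struct {
                uint64_t addr;  /* guest physical address */
                uint32_t len;   /* segment length in bytes */
                uint32_t pad;
        } ents[];
};

The host -> guest direction wouldn't need such a message; the host
would map the buffer into a shared memory region instead.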
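And for pipes, a userspace illustration of the transport idea (CID
and port are placeholders): once the extra vsock stream is connected,
the pipe contents can be shoveled over without extra copies using
splice():

#define _GNU_SOURCE
#include <sys/socket.h>
#include <linux/vm_sockets.h>
#include <fcntl.h>
#include <unistd.h>

/* tunnel the read end of a pipe through a dedicated vsock stream */
static int tunnel_pipe(int pipe_rd, unsigned int cid, unsigned int port)
{
        struct sockaddr_vm addr = {
                .svm_family = AF_VSOCK,
                .svm_cid    = cid,      /* placeholder */
                .svm_port   = port,     /* placeholder */
        };
        int s = socket(AF_VSOCK, SOCK_STREAM, 0);

        if (s < 0 || connect(s, (struct sockaddr *)&addr, sizeof(addr)) < 0)
                return -1;
        /* splice works here because one end (the pipe) is a pipe fd */
        while (splice(pipe_rd, NULL, s, NULL, 65536, SPLICE_F_MOVE) > 0)
                ;
        close(s);
        return 0;
}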
> I'd expect that buffers that are used for Wayland surfaces would be
> more than just a regular GL(ES) texture, since the compositor and
> virglrenderer would normally be different processes, with the
> former not having any idea of the latter's textures.

The wayland client exports the EGL frontbuffer as a dma-buf.

> By the way, are you perhaps planning to visit the Open Source
> Summit Japan in July [1]?

No.

cheers,
  Gerd
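PS: for the dma-buf export mentioned above, with the (existing)
EGL_MESA_image_dma_buf_export extension that looks roughly like this;
error handling dropped, single-plane case assumed:

#include <stdint.h>
#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <GLES2/gl2.h>

/* wrap a GL texture in an EGLImage and export it as a dma-buf fd */
static int export_tex_as_dmabuf(EGLDisplay dpy, EGLContext ctx, GLuint tex)
{
        PFNEGLCREATEIMAGEKHRPROC create_image =
                (void *)eglGetProcAddress("eglCreateImageKHR");
        PFNEGLEXPORTDMABUFIMAGEMESAPROC export_dmabuf =
                (void *)eglGetProcAddress("eglExportDMABUFImageMESA");

        EGLImageKHR image = create_image(dpy, ctx, EGL_GL_TEXTURE_2D_KHR,
                        (EGLClientBuffer)(uintptr_t)tex, NULL);
        int fd = -1;
        EGLint stride = 0, offset = 0;

        export_dmabuf(dpy, image, &fd, &stride, &offset);
        /* fd can now go to the compositor, e.g. via zwp_linux_dmabuf_v1 */
        return fd;
}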