On Mon, 12 Feb 2018 12:45:40 +0100
Gerd Hoffmann <kraxel@xxxxxxxxxx> wrote:

> Hi,
>
> > > (a) software rendering: client allocates shared memory buffer, renders
> > >     into it, then passes a file handle for that shmem block together
> > >     with some meta data (size, format, ...) to the wayland server.
> > >
> > > (b) gpu rendering: client opens a render node, allocates a buffer,
> > >     asks the gpu to render into it, exports the buffer as dma-buf
> > >     (DRM_IOCTL_PRIME_HANDLE_TO_FD), passes this to the wayland server
> > >     (again including meta data of course).
> > >
> > > Is that correct?
> >
> > Both are correct descriptions of typical behaviors. But it isn't spec'ed
> > anywhere who has to do the buffer allocation.
>
> Well, according to Pekka's reply it is spec'ed that way, for the
> existing buffer types. So for server-allocated buffers you need
> (a) a wayland protocol extension and (b) support for the extension
> in the clients.

Correct. Or simply a libEGL that uses such a Wayland extension behind
everyone's back. I believe such things did at least exist, but they are
probably not relevant for this discussion.

(If there is a standard library, like libEGL, that is loaded and used
by both a server and a client, that library can advertise custom
private Wayland protocol extensions, and the client side can take
advantage of them, without needing any code changes in either the
server or the client.)

> We also need a solution for the keymap shmem block. I guess the keymap
> doesn't change all that often, so maybe it is easiest to just copy it
> over (host proxy -> guest proxy) instead of trying to map the host
> shmem into the guest?

Yes, I believe that would be a perfectly valid solution for that
particular case.

Thanks,
pq
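
For concreteness, case (a) above is the wl_shm path. Below is a minimal
client-side sketch of it, assuming a wl_shm proxy already bound from the
registry; the memfd_create() allocation and the XRGB8888 format are
illustrative choices, not something this thread mandates:

#define _GNU_SOURCE
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>
#include <wayland-client.h>

/* Case (a): the client allocates the buffer itself and hands the fd to
 * the compositor via wl_shm.  'shm' is assumed to be a wl_shm proxy the
 * client already bound from the registry. */
static struct wl_buffer *
create_shm_buffer(struct wl_shm *shm, int width, int height)
{
    int stride = width * 4;            /* XRGB8888: 4 bytes per pixel */
    int size = stride * height;

    /* Anonymous shared memory; memfd_create() is the usual choice on
     * Linux, shm_open() elsewhere. */
    int fd = memfd_create("wl-shm-buffer", MFD_CLOEXEC);
    if (fd < 0)
        return NULL;
    if (ftruncate(fd, size) < 0) {
        close(fd);
        return NULL;
    }

    void *data = mmap(NULL, size, PROT_READ | PROT_WRITE,
                      MAP_SHARED, fd, 0);
    if (data == MAP_FAILED) {
        close(fd);
        return NULL;
    }
    memset(data, 0xff, size);          /* "render" something; a real
                                        * client keeps 'data' mapped
                                        * for drawing */

    /* The fd plus the meta data (size, stride, format) crosses the
     * protocol; the compositor mmaps the same pool on its side. */
    struct wl_shm_pool *pool = wl_shm_create_pool(shm, fd, size);
    struct wl_buffer *buffer =
        wl_shm_pool_create_buffer(pool, 0, width, height, stride,
                                  WL_SHM_FORMAT_XRGB8888);
    wl_shm_pool_destroy(pool);         /* buffer keeps the pool alive */
    close(fd);                         /* compositor holds its own dup */
    return buffer;
}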
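
Case (b) boils down to DRM_IOCTL_PRIME_HANDLE_TO_FD, which libdrm wraps
as drmPrimeHandleToFD(). A sketch under the assumption that drm_fd is an
open render node (e.g. /dev/dri/renderD128) and gem_handle names a
buffer the GPU has already rendered into; both are placeholder names:

#include <stdint.h>
#include <xf86drm.h>

/* Case (b): export a GEM handle as a dma-buf fd that can be sent to
 * the wayland server.  Returns the fd, or -1 on error. */
static int
export_dmabuf(int drm_fd, uint32_t gem_handle)
{
    int dmabuf_fd = -1;

    /* Thin libdrm wrapper around DRM_IOCTL_PRIME_HANDLE_TO_FD. */
    if (drmPrimeHandleToFD(drm_fd, gem_handle,
                           DRM_CLOEXEC | DRM_RDWR, &dmabuf_fd) != 0)
        return -1;

    /* The fd then crosses the protocol (e.g. zwp_linux_dmabuf_v1),
     * again together with width/height/stride/format meta data. */
    return dmabuf_fd;
}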
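
As for the keymap idea at the end: wl_keyboard delivers the keymap as an
fd plus a size, so the host proxy can map it read-only and copy the
bytes out, paying the cost only on the rare keymap change. A sketch,
where dst and dst_size stand in for a hypothetical guest-visible buffer
provided by the proxy transport:

#include <string.h>
#include <sys/mman.h>

/* Copy the compositor's keymap instead of trying to share the mapping
 * with the guest.  The fd is mapped MAP_PRIVATE, as newer wl_keyboard
 * versions require for keymap fds. */
static int
copy_keymap(int keymap_fd, size_t keymap_size,
            void *dst, size_t dst_size)
{
    if (keymap_size > dst_size)
        return -1;

    void *src = mmap(NULL, keymap_size, PROT_READ, MAP_PRIVATE,
                     keymap_fd, 0);
    if (src == MAP_FAILED)
        return -1;

    memcpy(dst, src, keymap_size);
    munmap(src, keymap_size);
    return 0;
}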