Re: [PATCH v3 1/2] drm/virtio: Add window server support

  Hi,

> > Hmm?  I'm assuming the wayland client (in the guest) talks to the
> > wayland proxy, using the wayland protocol, like it would talk to a
> > wayland display server.  Buffers must be passed from client to
> > server/proxy somehow, probably using fd passing, so where is the
> > problem?
> > 
> > Or did I misunderstand the role of the proxy?
> 
> Hi Gerd,
> 
> it's starting to look to me like we're talking a bit past each other, so
> I have pasted below a few words describing my current plan regarding the
> 3 key scenarios that I'm addressing.

You are describing the details, but I'm missing the big picture ...

So, virtualization aside, how do buffers work in wayland?  As far as I
know it goes like this:

  (a) software rendering: client allocates a shared memory buffer,
      renders into it, then passes a file handle for that shmem block
      together with some meta data (size, format, ...) to the wayland
      server.

  (b) gpu rendering: client opens a render node, allocates a buffer,
      asks the gpu to render into it, exports the buffer as a dma-buf
      (DRM_IOCTL_PRIME_HANDLE_TO_FD), passes this to the wayland server
      (again including meta data of course).

Is that correct?
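In code form the export in (b) is a single ioctl on the render node; a
quick sketch (function name is mine, error handling trimmed):

  #include <fcntl.h>
  #include <stdint.h>
  #include <sys/ioctl.h>
  #include <drm/drm.h>

  /* GEM handle -> dma-buf fd; "handle" comes from an earlier buffer
   * allocation on the same device. */
  int export_as_dmabuf(int drm_fd, uint32_t handle)
  {
      struct drm_prime_handle args = {
          .handle = handle,
          .flags  = DRM_CLOEXEC,
          .fd     = -1,
      };

      if (ioctl(drm_fd, DRM_IOCTL_PRIME_HANDLE_TO_FD, &args) < 0)
          return -1;
      return args.fd;  /* this fd goes to the server via SCM_RIGHTS */
  }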

Now, with virtualization added to the mix it becomes a bit more
complicated.  Client and server are unmodified.  The client talks to the
guest proxy (wayland protocol).  The guest proxy talks to the host proxy
(protocol to be defined). The host proxy talks to the server (wayland
protocol).
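In a picture:

  guest:  client --(wayland)--> guest proxy
                                     |
                                     | (protocol to be defined)
                                     v
  host:                         host proxy --(wayland)--> server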

Buffers must be managed along the way, and we want to avoid copying them
around.  The host proxy could be implemented directly in qemu, or as a
separate process which cooperates with qemu for buffer management.

Fine so far?

> I really think that whatever we come up with needs to support 3D clients as
> well.

Let's start with 3d clients, I think these are easier.  They simply use
virtio-gpu for 3d rendering as usual.  When they are done, the rendered
buffer already lives in a host drm buffer (because virgl runs the actual
rendering on the host gpu).  So the client passes the dma-buf to the
guest proxy, the guest proxy imports it to look up the resource-id,
passes the resource-id to the host proxy, the host proxy looks up the
drm buffer and exports it as dma-buf, then passes it to the server.
Done, without any extra data copies.
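The guest proxy side of that lookup would be roughly this (a sketch
against the virtgpu uapi, function name is mine, error handling
trimmed):

  #include <stdint.h>
  #include <sys/ioctl.h>
  #include <drm/drm.h>
  #include <drm/virtgpu_drm.h>

  /* dma-buf fd from the client -> host resource-id for the host proxy. */
  int lookup_resource_id(int virtgpu_fd, int dmabuf_fd, uint32_t *res_id)
  {
      struct drm_prime_handle prime = { .fd = dmabuf_fd };
      struct drm_virtgpu_resource_info info = { 0 };

      /* import: dma-buf fd -> GEM handle on the virtio-gpu device */
      if (ioctl(virtgpu_fd, DRM_IOCTL_PRIME_FD_TO_HANDLE, &prime) < 0)
          return -1;

      /* ask the driver which host resource backs this bo */
      info.bo_handle = prime.handle;
      if (ioctl(virtgpu_fd, DRM_IOCTL_VIRTGPU_RESOURCE_INFO, &info) < 0)
          return -1;

      *res_id = info.res_handle;
      return 0;
  }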

> Creation of shareable buffer by guest
> -------------------------------------------------
> 
> 1. Client requests virtio driver to create a buffer suitable for sharing
> with host (DRM_VIRTGPU_RESOURCE_CREATE)

client or guest proxy?

> 4. QEMU maps that buffer to the guest's address space
> (KVM_SET_USER_MEMORY_REGION), passes the guest PFN to the virtio driver

That part is problematic.  The host can't simply allocate something in
the physical address space, because most physical address space
management is done by the guest.  All PCI BARs are mapped by the guest
firmware, for example (or by the guest OS in the case of hotplug).
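To make the problem concrete, this is roughly what qemu would have to
do for that step (a sketch, function name is mine); the guest_phys_addr
argument is exactly the part the host can't just invent:

  #include <stdint.h>
  #include <sys/ioctl.h>
  #include <linux/kvm.h>

  int map_into_guest(int vm_fd, uint32_t slot, uint64_t guest_phys_addr,
                     uint64_t size, void *host_ptr)
  {
      struct kvm_userspace_memory_region region = {
          .slot            = slot,
          .guest_phys_addr = guest_phys_addr,  /* who owns this range? */
          .memory_size     = size,
          .userspace_addr  = (uint64_t)(uintptr_t)host_ptr,
      };
      return ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &region);
  }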

> 4. QEMU pops data+buffers from the virtqueue, looks up shmem FD for each
> resource, sends data + FDs to the compositor with SCM_RIGHTS

BTW: Is there a 1:1 relationship between buffers and shmem blocks?  Or
does the wayland protocol allow for offsets in buffer meta data, so you
can place multiple buffers in a single shmem block?
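(For reference, the SCM_RIGHTS bit above is plain unix fd passing; a
minimal sketch, one fd per message, function name is mine:)

  #include <string.h>
  #include <sys/socket.h>
  #include <sys/uio.h>

  int send_with_fd(int sock, const void *data, size_t len, int fd)
  {
      struct iovec iov = { .iov_base = (void *)data, .iov_len = len };
      union {
          struct cmsghdr hdr;
          char buf[CMSG_SPACE(sizeof(int))];
      } ctrl = { 0 };
      struct msghdr msg = {
          .msg_iov        = &iov,
          .msg_iovlen     = 1,
          .msg_control    = ctrl.buf,
          .msg_controllen = sizeof(ctrl.buf),
      };
      struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);

      cmsg->cmsg_level = SOL_SOCKET;
      cmsg->cmsg_type  = SCM_RIGHTS;
      cmsg->cmsg_len   = CMSG_LEN(sizeof(int));
      memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));

      return sendmsg(sock, &msg, 0) < 0 ? -1 : 0;
  }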

cheers,
  Gerd
