Re: [RFC PATCH 0/8] *** Per context fencing ***

On Tue, Mar 10, 2020 at 12:43 AM Gerd Hoffmann <kraxel@xxxxxxxxxx> wrote:
On Mon, Mar 09, 2020 at 06:08:10PM -0700, Gurchetan Singh wrote:
> We don't want fences from different 3D contexts/processes (GL, VK) to
> be on the same timeline. Sending this out as a RFC to solicit feedback
> on the general approach.

NACK.

virtio fences are global, period.  You can't simply change fence
semantics like this.  At least not without a virtio protocol update
because guest and host need to be on the same page here. 

I should've been clearer -- this is an internal cleanup/preparation, and the per-context changes are invisible to host userspace.
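To illustrate the idea (sketch only -- the names below are made up and don't match the series 1:1): each open 3D context gets its own dma_fence timeline via dma_fence_context_alloc(), while the fence_id that goes over the wire stays a single monotonic counter.

#include <linux/atomic.h>
#include <linux/dma-fence.h>
#include <linux/spinlock.h>

extern const struct dma_fence_ops virtio_fence_ops; /* driver's existing fence ops */

struct virtio_gpu_fpriv {
	u32 ctx_id;
	u64 fence_context;	/* private dma_fence timeline */
	atomic64_t seqno;	/* position on that timeline */
	spinlock_t lock;
};

static void virtio_gpu_fpriv_init_fencing(struct virtio_gpu_fpriv *fpriv)
{
	fpriv->fence_context = dma_fence_context_alloc(1);
	atomic64_set(&fpriv->seqno, 0);
	spin_lock_init(&fpriv->lock);
}

/* Each context gets its own timeline, so GL and VK fences no longer
 * interleave; the wire-visible fence_id stays one global monotonic
 * counter, which is why host userspace sees nothing new. */
static void virtio_gpu_fence_init_ctx(struct virtio_gpu_fpriv *fpriv,
				      struct dma_fence *f)
{
	dma_fence_init(f, &virtio_fence_ops, &fpriv->lock,
		       fpriv->fence_context,
		       atomic64_inc_return(&fpriv->seqno));
}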

Just look at
virgl_renderer_callbacks->write_fence() and how it doesn't consider
contexts at all.

So one way out would be to add a virtio feature flag for this, so guest
& host can negotiate whether fences are global or per-context.

Yeah, we'll need something like VIRTIO_GPU_FLAG_FENCE_V2 eventually, which means virgl_write_fence cannot assume fence_id is the global sequence number.  It's a bit tricky, and we have to consider a few cases:

1) Current kernel/current host userspace.  Everything works.

2) Newer kernel (with this series) on current host userspace.  For that, fence_id needs to remain a monotonically increasing value, and it does.  I ran the standard test apps (Unigine Valley, dEQP, etc.) with this change and things seem fine.

3) Current kernel on newer host userspace.  New host won't see VIRTIO_GPU_FLAG_FENCE_V2, everything should work as usual.

4) Newer kernel on newer host userspace.  virgl_write_fence signals fences only in a specific context (or possibly only one fence at a time).  Guest kernel processing based on the pair {fence_id, fence_context} makes a difference in a multi-process environment.

If I have things right (and again, it's a bit tricky), the virtio protocol update will only be required for (4).  It would be nice to get the refactorings in beforehand to avoid mega-changes, if we agree on the general approach.
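To sketch what (4) could mean for guest-side fence processing (struct and field names below are made up for illustration, and locking is elided):

#include <linux/dma-fence.h>
#include <linux/list.h>

struct virtio_gpu_fence {
	struct dma_fence f;
	u64 fence_id;		/* wire-visible id */
	struct list_head node;
};

struct virtio_gpu_device {
	struct list_head fence_list;
};

/* With per-context fences, the completion event carries the pair
 * {fence_id, fence_context}; only fences on the matching timeline are
 * reaped, instead of everything below a global sequence number. */
static void virtio_gpu_fence_event_process(struct virtio_gpu_device *vgdev,
					   u64 fence_id, u64 fence_context)
{
	struct virtio_gpu_fence *curr, *tmp;

	list_for_each_entry_safe(curr, tmp, &vgdev->fence_list, node) {
		if (curr->f.context != fence_context)
			continue;	/* different timeline, skip */
		if (curr->fence_id > fence_id)
			continue;	/* not signaled yet */
		dma_fence_signal(&curr->f);
		list_del(&curr->node);
		dma_fence_put(&curr->f);
	}
}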

Side note:

Fences do have an associated context ID in virglrenderer [1], though we don't pass down the correct ctx ID just yet [2].

[1] https://gitlab.freedesktop.org/virgl/virglrenderer/-/blob/master/src/virglrenderer.h#L204
[2] https://github.com/qemu/qemu/blob/master/hw/display/virtio-gpu-3d.c#L490
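Once the kernel plumbs the context through, the QEMU side of [2] could presumably be as simple as the following (untested sketch; virtio_gpu_ctrl_hdr already carries a ctx_id field, it's just not wired up to the fence today):

    /* sketch of the [2] call site with the correct ctx ID passed down */
    if (cmd->cmd_hdr.flags & VIRTIO_GPU_FLAG_FENCE) {
        virgl_renderer_create_fence(cmd->cmd_hdr.fence_id,
                                    cmd->cmd_hdr.ctx_id);
    }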

Another approach would be to go multiqueue, with each virtqueue being
roughly the same as a rendering pipeline on physical hardware, then have
per-virtqueue fences.

Multi-queue sounds very interesting indeed, especially with VK multi-threaded command submission.  That to me is V3 rather than V2... let's start easy!
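For comparison, a per-virtqueue timeline for that hypothetical V3 might look something like this (nothing like it exists today, just to show the shape):

#include <linux/atomic.h>
#include <linux/spinlock.h>
#include <linux/virtio.h>

/* Purely hypothetical sketch: one fence timeline per submission
 * virtqueue, roughly mirroring per-ring fences on physical GPUs. */
struct virtio_gpu_submit_queue {
	struct virtqueue *vq;
	u64 fence_context;	/* dma_fence timeline for this queue */
	atomic64_t seqno;
	spinlock_t lock;
};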

When going multiqueue we might also rethink the cursor queue approach.
I think it makes sense to simply allow sending cursor commands to any
queue then.  A separate cursor queue continues to be an option for the
guest then, but I'm not sure how useful that actually is in practice
given that cursor image updates are regular resource updates and have to
go through the control queue, so virtio_gpu_cursor_plane_update() has to
wait for the resource update to finish before it can queue the cursor
command.  I suspect the idea to fast-track cursor updates with a
separate queue doesn't work that well in practice because of that.

cheers,
  Gerd

_______________________________________________
dri-devel mailing list
dri-devel@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/dri-devel
