On Thu, Apr 28, 2016 at 11:36:44AM -0300, Gustavo Padovan wrote:
> 2016-04-27 Daniel Stone <daniel@xxxxxxxxxxxxx>:
> 
> > Hi,
> > 
> > On 26 April 2016 at 21:48, Greg Hackmann <ghackmann@xxxxxxxxxx> wrote:
> > > On 04/26/2016 01:05 PM, Daniel Vetter wrote:
> > >> On Tue, Apr 26, 2016 at 09:55:06PM +0300, Ville Syrjälä wrote:
> > >>> What are they doing that they can't stuff the fences into an
> > >>> array instead of props?
> > >>
> > >> The hw composer interface is one in-fence per plane. That's really
> > >> the major reason why the kernel interface is built to match. And I
> > >> really don't think we should diverge just because we have a
> > >> slightly different color preference ;-)
> > >
> > > The relationship between layers and fences is only fuzzy and
> > > indirect though. The relationship is really between the buffer
> > > you're displaying on that layer, and the fence representing the
> > > work done to render into that buffer. SurfaceFlinger just happens
> > > to bundle them together inside the same struct hwc_layer_1 as an
> > > API convenience.
> > 
> > Right, and when using implicit fencing, this comes as a plane
> > property, by virtue of plane -> fb -> buffer -> fence.
> > 
> > > Which is kind of splitting hairs as long as you have a 1-to-1
> > > relationship between layers and DRM planes. But that's not always
> > > the case.
> > 
> > Can you please elaborate?
> > 
> > > A (per-CRTC?) array of fences would be more flexible. And even in
> > > the cases where you could make a 1-to-1 mapping between planes and
> > > fences, it's not that much more work for userspace to assemble
> > > those fences into an array anyway.
> > 
> > As Ville says, I don't want to go down the path of scheduling CRTC
> > updates separately, because that breaks MST pretty badly. If you
> > don't want your updates to display atomically, then don't schedule
> > them atomically ... ? That's the only reason I can see for making
> > fencing per-CRTC, rather than just a pile of unassociated fences
> > appended to the request. Per-CRTC fences also force userspace to
> > merge fences before submission when using multiple planes per CRTC,
> > which is pretty punitive.
> > 
> > I think having it semantically attached to the plane is a little bit
> > nicer for tracing (why was this request delayed? -> a fence -> which
> > buffer was that fence for?) at a glance. Also the 'pile of appended
> > fences' model is a bit awkward for more generic userspace, which
> > creates a libdrm request and builds it (add a plane, try it out,
> > wind back) incrementally. Using properties makes that really easy,
> > but without properties, we'd have to add separate codepaths - and
> > thus separate ABI, which complicates distribution - to libdrm to
> > account for a separate plane array which shares a cursor with the
> > properties. So for that reason if no other, I'd really prefer not to
> > go down that route.
> 
> I also agree with having it as a FENCE_FD prop on the plane.
> Summarizing the arguments on this thread, they are:

Your "summary" forgot to include any counterarguments.

> 
> - implicit fences also need one fence per plane/fb, so it will be good
>   to match that.

We would actually need a fence per object rather than per fb.

> - requires userspace to always merge fences

"doesn't", I assume? But that's not true if it's an array. It would be
true if you had just one fence for the whole thing, or one per crtc.
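To spell out what that merge step would cost userspace: with a single
fence slot, or one per crtc, every plane beyond the first means an
extra merge. A rough sketch, assuming the sync_file UAPI currently
being de-staged (<linux/sync_file.h> and SYNC_IOC_MERGE); the helper
name is made up:

#include <string.h>
#include <sys/ioctl.h>
#include <linux/sync_file.h>

/*
 * Sketch, assuming the de-staged sync_file UAPI: merge two sync_file
 * fds into one fence that signals only once both inputs have
 * signalled. Returns the new fd, or -1 on error. Both input fds stay
 * valid and must still be closed by the caller.
 */
static int merge_fences(int fd1, int fd2)
{
	struct sync_merge_data data;

	memset(&data, 0, sizeof(data));
	strncpy(data.name, "plane-merge", sizeof(data.name) - 1);
	data.fd2 = fd2;

	if (ioctl(fd1, SYNC_IOC_MERGE, &data) < 0)
		return -1;

	return data.fence;
}

With a per-plane FENCE_FD prop none of this is needed: each plane just
carries its own fd.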
> - can use standard plane properties, making kernel and userspace life
>   easier; an array brings more work to build the atomic request, plus
>   extra checking in the kernel.

I don't really get this one. The objects and props are arrays too. Why
is another array so problematic?

> - does not require changes to drivers
> - better for tracing, can identify the buffer/fence promptly

Can fences be reused somehow while still attached to a plane, or ever?
That might cause some oddness if you, say, leave a fence attached to
one plane and then do a modeset on another crtc which perhaps needs to
turn the first crtc off+on to reconfigure something.

-- 
Ville Syrjälä
Intel OTC
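To make the "standard plane properties" point above concrete, this is
roughly what the userspace side looks like with the FENCE_FD prop
proposed in this series, using only the existing libdrm atomic API,
including the cursor-based wind-back Daniel mentioned. A sketch only:
commit_with_fence and its parameter names are made up, and the
property ids are assumed to come from a one-time
drmModeObjectGetProperties()/drmModeGetProperty() lookup:

#include <errno.h>
#include <stdint.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

/*
 * Sketch: build and commit an atomic request with an explicit fence
 * attached to one plane. fb_prop_id is the plane's FB_ID property,
 * fence_prop_id the proposed FENCE_FD property; both are assumed to
 * have been looked up once via drmModeObjectGetProperties().
 */
int commit_with_fence(int drm_fd, uint32_t plane_id,
		      uint32_t fb_prop_id, uint32_t fb_id,
		      uint32_t fence_prop_id, int fence_fd)
{
	drmModeAtomicReq *req = drmModeAtomicAlloc();
	int cursor, ret;

	if (!req)
		return -ENOMEM;

	/* Remember the cursor so a failed test commit can wind back. */
	cursor = drmModeAtomicGetCursor(req);

	drmModeAtomicAddProperty(req, plane_id, fb_prop_id, fb_id);
	drmModeAtomicAddProperty(req, plane_id, fence_prop_id, fence_fd);

	ret = drmModeAtomicCommit(drm_fd, req,
				  DRM_MODE_ATOMIC_TEST_ONLY, NULL);
	if (ret)
		drmModeAtomicSetCursor(req, cursor); /* drop this plane */
	else
		ret = drmModeAtomicCommit(drm_fd, req, 0, NULL);

	drmModeAtomicFree(req);
	return ret;
}

No new codepaths, no separate fence array sharing a cursor with the
property updates; the fence rides along like any other plane property.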