On Thu, Aug 9, 2018 at 4:54 PM, Christian König <christian.koenig@xxxxxxx> wrote:
> On 09.08.2018 at 16:22, Daniel Vetter wrote:
>> On Thu, Aug 9, 2018 at 3:58 PM, Christian König
>> <ckoenig.leichtzumerken@xxxxxxxxx> wrote:
>>> On 09.08.2018 at 15:38, Daniel Vetter wrote:
>>>> On Thu, Aug 09, 2018 at 01:37:07PM +0200, Christian König wrote:
>>>> [SNIP]
>>>
>>> See, to me the exclusive fence in the reservation object is not even
>>> remotely related to implicit or explicit synchronization.
>>
>> Hm, I guess that's the confusion then. The only reason we have the
>> exclusive fence is to implement cross-driver implicit syncing. What
>> else you do internally in your driver doesn't matter, as long as you
>> keep up that contract.
>>
>> And it's intentionally not called write_fence or anything like that,
>> because that's not what it tracks.
>>
>> Of course any buffer moves the kernel does must also be tracked in
>> the exclusive fence, because userspace cannot know about these. So
>> you might have an exclusive fence set and also an explicit fence
>> passed in through the atomic ioctl. Aside: right now all drivers only
>> observe one or the other, not both, so this will break as soon as we
>> start moving shared buffers around. At least on Android or anything
>> else using explicit fencing.
>
> Actually both radeon and nouveau use the approach that shared fences
> need to be waited on as well when they don't come from the current
> driver.
>
>> So here's my summary, as I understand things right now:
>> - for non-shared buffers at least, amdgpu uses explicit fencing, and
>> hence all fences caused by userspace end up as shared fences, whether
>> they are writes or reads. This means you end up with possibly
>> multiple write fences, but never any exclusive fences.
>> - for non-shared buffers the only exclusive fences amdgpu sets are
>> for buffer moves done by the kernel.
>> - amdgpu (the kernel + userspace combo here) does not seem to have a
>> concept/tracking for when a buffer is used with implicit or explicit
>> fencing. It does however track all writes.
>
> No, that is incorrect. It tracks all accesses to a buffer object in
> the form of shared fences; we don't care whether it is a write or not.
>
> What we also track is which client used a BO last, and as long as the
> same client uses the BO we don't add any implicit synchronization.
>
> Only when a BO is used by another client do we add implicit
> synchronization for all command submissions. This behavior can be
> disabled with a flag during BO creation.

I'm only interested in the case of shared buffers. And for those you
_do_ pessimistically assume that all access must be implicitly synced.
Iirc amdgpu doesn't support EGL_ANDROID_native_fence_sync, so it makes
sense that you don't bother with it.
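For reference, the cross-driver contract being talked about here boils
down to something like the following on the importing side. This is
only a minimal sketch against today's reservation_object API, not code
from any particular driver:

/*
 * Minimal sketch (not from any real driver): an importer that only
 * understands implicit fencing syncs to a shared BO by waiting on its
 * reservation object before touching it.
 */
#include <linux/dma-fence.h>
#include <linux/reservation.h>
#include <linux/sched.h>

static long importer_sync_to_bo(struct reservation_object *resv, bool write)
{
        /*
         * wait_all == false: wait only on the exclusive slot, i.e. the
         * implicit sync contract for readers.
         * wait_all == true: wait on the exclusive fence plus all shared
         * fences, which is what a writer has to do before adding its
         * own write.
         */
        return reservation_object_wait_timeout_rcu(resv, write, true,
                                                   MAX_SCHEDULE_TIMEOUT);
}

The point of the contract is that anything an external driver must wait
for before reading has to end up in that single exclusive slot.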
>> - as a consequence, amdgpu needs to pessimistically assume that all
>> writes to a shared buffer need to obey implicit fencing rules.
>> - for shared buffers (across processes or drivers) implicit fencing
>> does _not_ allow concurrent writers. That limitation is why people
>> want to do explicit fencing, and it's the reason why there's only 1
>> slot for an exclusive fence. Note I really mean concurrent here: a
>> queue of in-flight writes by different batches is perfectly fine, but
>> it's a fully ordered queue of writes.
>> - but as a consequence of amdgpu's lack of implicit fencing, and
>> hence the need to pessimistically assume there are multiple write
>> fences, amdgpu needs to put multiple fences behind the single
>> exclusive slot. This is a limitation imposed by the amdgpu stack, not
>> something inherent to how implicit fencing works.
>> - Chris Wilson's patch implements all this (and afaics, with a bit
>> more coffee, correctly).
>>
>> If you want to be less pessimistic in amdgpu for shared buffers, you
>> need to start tracking which shared buffer accesses need implicit
>> sync and which need explicit sync. What you can't do is suddenly
>> create more than 1 exclusive fence, that's not how implicit fencing
>> works. Another thing you cannot do is force everyone else (in
>> non-amdgpu or core code) to sync against _all_ writes, because that
>> forces implicit syncing. Which people very much don't want.
>
> I also do see the problem that most other hardware doesn't need that
> functionality, because it is driven by a single engine. That's why I
> tried to keep the overhead as low as possible.
>
> But at least for amdgpu (and I strongly suspect for nouveau as well)
> it is absolutely vital in a number of cases to allow concurrent
> accesses from the same client, even when the BO is later used with
> implicit synchronization.
>
> This is also the reason why the current workaround is so problematic
> for us, because as soon as the BO is shared with another (non-amdgpu)
> device all command submissions to it will be serialized, even when
> they come from the same client.
>
> Would it be an option to extend the concept of the "owner" of the BO
> that amdgpu uses to other drivers as well?
>
> When you already have explicit synchronization inside your client, but
> not between clients (e.g. something still using DRI2 or DRI3), this
> could be rather beneficial for others as well.

Again: how you synchronize your driver-internal rendering is totally up
to you. If you see an exclusive fence set by amdgpu, and you submit new
rendering by amdgpu, you can totally ignore that exclusive fence. The
only API contracts for implicit fencing are between drivers, for shared
buffers.

If you submit rendering to a shared buffer in parallel, all without
attaching an exclusive fence, that's perfectly fine. But at some point
later on, depending upon the protocol (glFlush or glXSwapBuffers or
whatever), you have to collect all those concurrent write hazards and
bake them into 1 single exclusive fence for implicit fencing.

Atm (and Chris seems to concur) the amdgpu uapi doesn't allow you to do
that, so for anything shared you have to be super pessimistic. Adding a
HAND_OFF_FOR_IMPLICIT_FENCING flag/ioctl would probably fix that. Only
when that flag is set would you take all the shared write hazards and
bake them into one exclusive fence for hand-off to the next driver
(rough sketch of that baking step below). You'd also need the same when
receiving an implicitly fenced buffer, to make sure that your concurrent
writes do synchronize with the reading (aka shared fences) done by other
drivers.

With a bunch of trickery and hacks it might be possible to infer this
from the current ioctls even, but you need to be really careful.

And you're right that amdgpu seems to be the only (or one of the only)
drivers which does truly concurrent rendering to the same buffer (not
just concurrent rendering to multiple buffers all suballocated from the
same BO).

But we can't fix this in the kernel with the tricks you propose, because
without such a uapi extension (or inference) we can't tell the implicit
fencing case from the explicit fencing case. And for shared buffers with
explicit fencing we _must_ _not_ sync against all writes. owner won't
help here, because it still doesn't track whether something is
explicitly or implicitly synced.
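Concretely, the "collect all the write hazards and bake them into one
exclusive fence" step could look roughly like the sketch below. This is
only an illustration against the current reservation_object /
dma_fence_array API, not actual amdgpu code, and the trigger for it (the
HAND_OFF_FOR_IMPLICIT_FENCING flag above) is hypothetical:

/*
 * Illustration only, not actual amdgpu code: collapse the fences
 * currently attached to a BO's reservation object into one exclusive
 * fence, so that other drivers doing implicit fencing sync against all
 * prior (possibly concurrent) writes.
 */
#include <linux/dma-fence.h>
#include <linux/dma-fence-array.h>
#include <linux/reservation.h>
#include <linux/slab.h>

static int bake_exclusive_fence(struct reservation_object *resv)
{
        struct dma_fence *excl, **shared;
        struct dma_fence_array *array;
        unsigned int count;
        int ret;

        ww_mutex_lock(&resv->lock, NULL);

        /* Snapshot the current exclusive and shared fences (refs held). */
        ret = reservation_object_get_fences_rcu(resv, &excl, &count, &shared);
        if (ret)
                goto unlock;

        if (!count)
                goto put_excl; /* nothing to collapse */

        /*
         * Tie all the (write) hazards from the shared slots into a
         * single fence.  The array takes ownership of the shared[]
         * array and its references.
         */
        array = dma_fence_array_create(count, shared,
                                       dma_fence_context_alloc(1), 1, false);
        if (!array) {
                while (count--)
                        dma_fence_put(shared[count]);
                kfree(shared);
                ret = -ENOMEM;
                goto put_excl;
        }

        /*
         * Install it as the one exclusive fence.  Note: a real
         * implementation must not lose the previous exclusive fence
         * (e.g. a TTM buffer move) here; it would have to be folded
         * into the array as well.  Skipped for brevity.
         */
        reservation_object_add_excl_fence(resv, &array->base);
        dma_fence_put(&array->base);

put_excl:
        dma_fence_put(excl);
unlock:
        ww_mutex_unlock(&resv->lock);
        return ret;
}

On the receiving side you would do the mirror image: when an implicitly
fenced buffer comes in, treat its exclusive fence as a dependency for
all your concurrent writers.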
We've cheated a bit with most other drivers in this area, also because
we don't have to deal with truly concurrent rendering. So it's not
obvious that what we're tracking isn't writes/reads but
implicit/explicit fencing. But semantically we do track the latter for
shared buffers.

Cheers, Daniel

PS: One idea I have for inference: every time you see a shared buffer
in an amdgpu CS:

1. Grab the reservation lock.
2. Check all the fences' creators. If any of them are foreign (not by
   amdgpu), then run the current pessimistic code.
3. If all fences are by amdgpu:
   - Look at the exclusive fence. If it's a ttm bo move, keep it; if
     it's marked as a special implicit syncing fence, ignore it.
   - Run all CS concurrently by storing all their write fences in the
     shared slots.
   - Create a fake exclusive fence which ties all the write hazards
     into one fence. Mark these as special implicit syncing fences in
     your amdgpu_fence struct. This will make sure other drivers sync
     properly, but since you ignore these special fences it won't
     reduce amdgpu-internal concurrency.
   - Make sure you don't drop the ttm bo move fences accidentally;
     that will be some tricky accounting.
4. Submit the CS and drop the reservation lock.

I think this would work, but it would be much cleaner if you made this
an explicit part of the amdgpu uapi. (A rough sketch of the step 2/3
check is below.)
--
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch
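For illustration, a very rough sketch of the "are all fences ours?"
check from steps 2 and 3 above. This is not actual amdgpu code;
amdgpu_fence_is_ours() is a made-up placeholder for however amdgpu
recognizes its own fences, and the caller is assumed to already hold
the reservation lock (step 1):

#include <linux/dma-fence.h>
#include <linux/reservation.h>

/* Placeholder: however amdgpu identifies fences it created itself. */
bool amdgpu_fence_is_ours(struct dma_fence *fence);

static bool all_fences_are_ours(struct reservation_object *resv)
{
        struct reservation_object_list *list;
        struct dma_fence *excl;
        unsigned int i;

        /* Caller holds the reservation (ww_mutex) lock. */
        excl = reservation_object_get_excl(resv);
        if (excl && !amdgpu_fence_is_ours(excl))
                return false;

        list = reservation_object_get_list(resv);
        for (i = 0; list && i < list->shared_count; i++) {
                struct dma_fence *fence =
                        rcu_dereference_protected(list->shared[i],
                                        reservation_object_held(resv));

                if (!amdgpu_fence_is_ours(fence))
                        return false;
        }

        return true;
}

If this returns false, the CS falls back to today's pessimistic sync;
if it returns true, the CS only adds its write fence to the shared
slots and (re)creates the special baked exclusive fence, as sketched
further up in the thread.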