Quoting Matthew Auld (2020-06-04 14:37:40)
> On Thu, 4 Jun 2020 at 11:38, Chris Wilson <chris@xxxxxxxxxxxxxxxxxx> wrote:
> >
> > Reduce the 3 relocation paths down to the single path that accommodates
> > all. The primary motivation for this is to guard the relocations with a
> > natural fence (derived from the i915_request used to write the
> > relocation from the GPU).
> >
> > The tradeoff in using async gpu relocations is that it increases latency
> > over using direct CPU relocations, for the cases where the target is
> > idle and accessible by the CPU. The benefit is greatly reduced lock
> > contention and improved concurrency by pipelining.
> >
> > Note that forcing the async gpu relocations does reveal a few issues
> > they have. Firstly, they are visible as writes to gem_busy, causing us
> > to mark some buffers as being written to by the GPU even though
> > userspace only reads. Secondly, in combination with the cmdparser, they
> > can cause priority inversions. This appears to be because the work is
> > put onto a common workqueue, losing our priority information, and so is
> > executed in FIFO order from the worker, denying us the opportunity to
> > reorder the requests afterwards.
> >
> > Signed-off-by: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
> Reviewed-by: Matthew Auld <matthew.auld@xxxxxxxxx>

Fwiw, if anyone else is as concerned about the priority inversions via
the global system workqueues as I am, we need to teach the CPU scheduler
about our priorities. I am considering per-CPU kthreads and plugging
them into our scheduling backend. That should then be applicable to all
our async tasks (clflushing, binding, pages, random other tasks). The
devil is in the details, of course.
-Chris
_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/intel-gfx