On Wed, Sep 3, 2014 at 9:01 PM, Jesse Barnes <jbarnes@xxxxxxxxxxxxxxxx> wrote:
> On Wed, 3 Sep 2014 17:08:53 +0100
> Chris Wilson <chris@xxxxxxxxxxxxxxxxxx> wrote:
>> On Wed, Sep 03, 2014 at 08:41:06AM -0700, Jesse Barnes wrote:
>> > On Wed, 3 Sep 2014 08:01:55 +0100
>> > Chris Wilson <chris@xxxxxxxxxxxxxxxxxx> wrote:
>> >
>> > > These commands are illegal/invalid inside the object, only valid inside
>> > > the ring.
>> >
>> > Hm, we ought to be able to write to non-privileged space with
>> > STORE_DWORD, but that does mean moving to context-specific pages in
>> > process space, or at least adding them to our existing scheme.
>>
>> The per-process context page also doesn't exist generically. I certainly
>> hope that userspace can't overwrite the hws! Imagine if we were using
>> that for interrupt status reads, or seqno tracking...
>
> Yeah, I'm thinking of an additional hws that's per-context and userspace
> mappable. It could come in handy for userspace-only sync stuff.

Userspace can already do seqno writes with MI_FLUSH_DW or PIPE_CONTROL -
lots of igt tests actually do that for correctness checks.

So the only real gap is interrupts, and for those I think we want the
full request-tracking machinery in the kernel (otherwise I fear we'll
have even more fun with lost/spurious interrupts, since the hw guys just
seem unable to get that right). Which means a full batch split.

I have no idea how that's supposed to work when userspace does direct
hardware submission. But that's kind of a good reason not to do it
anyway, and at least for now it looks like direct hw submission is for
OpenCL 2 only, with interop with other devices (where sync matters) not
being a use-case. For interop with other processes the gpu can always do
a seqno write to some shared page. Plus busy-looping, but apparently
that's what people want for low latency. Or at least what designers seem
to think people want ...
-Daniel
--
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch
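
As a concrete sketch of the userspace seqno write mentioned above: a user
batch can end with a gen7-style PIPE_CONTROL whose post-sync operation
writes an immediate dword to a page both sides can read, which is roughly
what the igt tests do. The dword encodings below follow my reading of the
i915 driver headers but should be treated as illustrative, and the buffer
handling around them is hypothetical:

#include <stdint.h>

#define GFX_OP_PIPE_CONTROL(len) \
	((0x3u << 29) | (0x3u << 27) | (0x2u << 24) | ((len) - 2))
#define PIPE_CONTROL_CS_STALL   (1u << 20)
#define PIPE_CONTROL_QW_WRITE   (1u << 14)   /* post-sync immediate write */
#define MI_BATCH_BUFFER_END     (0xAu << 23)

/* Fill 'batch' with a PIPE_CONTROL that writes 'seqno' to 'gtt_addr',
 * the GPU address of a page userspace also has mapped. Returns the
 * number of dwords emitted (kept even so the batch stays qword-aligned). */
static unsigned emit_seqno_batch(uint32_t *batch, uint32_t gtt_addr,
				 uint32_t seqno)
{
	unsigned i = 0;

	batch[i++] = GFX_OP_PIPE_CONTROL(4);
	batch[i++] = PIPE_CONTROL_CS_STALL | PIPE_CONTROL_QW_WRITE;
	batch[i++] = gtt_addr;	/* post-sync write target */
	batch[i++] = seqno;	/* immediate data: the seqno value */
	batch[i++] = MI_BATCH_BUFFER_END;
	batch[i++] = 0;		/* pad */

	return i;
}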
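
The cross-process interop at the end (a seqno write to a shared page,
plus busy-looping) then reduces to the consumer spinning on that page
until the GPU's write lands. A minimal sketch, assuming both processes
have the page mapped; the function name is hypothetical, and the signed
comparison mirrors the wrap-safe seqno test the kernel uses:

#include <stdint.h>
#include <emmintrin.h>	/* _mm_pause */

static void busy_wait_seqno(volatile uint32_t *shared_page, uint32_t target)
{
	/* Signed difference so a wrapping seqno still compares correctly. */
	while ((int32_t)(*shared_page - target) < 0)
		_mm_pause();	/* ease off the sibling hyperthread */
}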