On Mon, Apr 13, 2015 at 07:23:34PM +0200, Daniel Vetter wrote:
> On Mon, Apr 13, 2015 at 04:52:17PM +0200, Christian König wrote:
> > From: Christian König <christian.koenig@xxxxxxx>
> >
> > WIP patch which adds an user fence IOCTL.
> >
> > Signed-off-by: Christian König <christian.koenig@xxxxxxx>
>
> I've discussed userspace fences a lot with Jerome last XDC, so here's my
> comments:
>
> My primary concern with mid-batch fences is that if we create real kernel
> fences (which might even escape to other places using android syncpts or
> dma-buf) then we end up relying upon correct userspace to not hang the
> kernel, which isn't good.

Yes, I agree on that; the solution I propose makes sure that this cannot
happen.

> So imo any kind of mid-batch fence must be done completely in userspace
> and never show up as a fence object on the kernel side. I thought that
> just busy-spinning in userspace would be all that's needed, but adding an
> ioctl to wait on such user fences seems like a nice idea too. On i915 we
> even have 2 interrupt sources per ring, so we could split the irq
> processing between kernel fences and userspace fences.

Technically the kernel does not allocate any object here; it is just
that the kernel can enable the GPU interrupt and thus wait
"intelligently" until the GPU fires an interrupt telling us that it
might be a good time to look at the fence value. So technically this
ioctl is nothing more than a wait-for-irq-and-check-memory-value (see
the sketch at the end of this mail).

> One thing to keep in mind (I dunno radeon/ttm internals enough to know) is
> to make sure that while being blocked for a userspace fence in the ioctl
> you're not starving anyone else. But it doesn't look like you're holding
> any reservation objects or something similar which might prevent
> concurrent cs.

Yes, this is the discussion we are having: how to make sure that such an
ioctl would not block any regular processing, so that it could not be
abused in any way (well, at least in any way my devious imagination can
think of right now :)).

Cheers,
Jérôme
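
To make that concrete, here is a minimal userspace-side sketch of such a
wait, assuming a hypothetical argument struct and ioctl number (the
struct layout, field names and ioctl number are illustrative guesses,
not the interface from the WIP patch): spin briefly on the fence word,
then fall back to the ioctl so the kernel can enable the GPU interrupt,
sleep, and re-check the memory value when the interrupt fires.

/*
 * Hypothetical sketch only: the struct layout, field names and ioctl
 * number below are illustrative assumptions, not the interface defined
 * by the WIP patch.
 */
#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/types.h>

/* Assumed argument block: where the fence word lives and what to wait for. */
struct user_fence_wait_args {
	__u64 gpu_addr;      /* GPU VA of the fence word written by the batch */
	__u64 target_value;  /* wait until the fence word reaches this value */
	__u64 timeout_ns;    /* bound the wait so the ioctl cannot block forever */
};

/* Hypothetical ioctl number, picked only for illustration. */
#define HYPOTHETICAL_IOCTL_USER_FENCE_WAIT \
	_IOW('d', 0x60, struct user_fence_wait_args)

/*
 * Spin briefly on the CPU mapping of the fence word, then fall back to
 * the ioctl so the kernel can sleep until the GPU interrupt fires and
 * re-check the memory value at that point.
 */
static int wait_user_fence(int drm_fd, volatile uint64_t *cpu_ptr,
			   uint64_t gpu_addr, uint64_t target,
			   uint64_t timeout_ns)
{
	/* Cheap busy-spin covers the common "almost signalled" case. */
	for (int i = 0; i < 1000; i++) {
		if (*cpu_ptr >= target)
			return 0;
	}

	struct user_fence_wait_args args = {
		.gpu_addr = gpu_addr,
		.target_value = target,
		.timeout_ns = timeout_ns,
	};

	/* No kernel fence object is created; this is just wait-for-irq-and-check. */
	return ioctl(drm_fd, HYPOTHETICAL_IOCTL_USER_FENCE_WAIT, &args);
}

The timeout in the sketch is one obvious way to keep the wait bounded,
so a fence value that never arrives cannot park the caller in the
kernel indefinitely; how exactly to avoid blocking or starving regular
processing is the open question discussed above.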