> Yeah, MAP_FIXED sounds a bit more ambitious, and though I think it
> would work for OCL 2.0 pointer sharing, it's a little different from
> what we were planning.
>
> To summarize, we have three possible approaches, each with its own
> problems:
>   1) simple patch to avoid binding at address 0 in PPGTT:
>      does impact the ABI (though generally not in a harmful way), and
>      may not be possible with aliasing PPGTT with e.g. framebuffers
>      bound at offset 0
>   2) exposing PIN_BIAS to userspace:
>      would allow userspace to avoid pinning any buffers at offset 0
>      at execbuf time, but still has the problem with previously bound
>      buffers and aliasing PPGTT
>   3) MAP_FIXED interface:
>      flexible approach allowing userspace to manage its own virtual
>      memory, but still has the same issues with aliasing PPGTT, and
>      with shared contexts, which would have to negotiate between
>      libraries how to handle the zero page
>
> For (1) and (2) the kernel pieces are really already in place; the
> main thing we need is a new flag to userspace to indicate the
> behavior. I'd prefer (1) with a context creation flag to indicate
> "don't bind at 0". Execbuf would try to honor this, and userspace
> could check whether any buffers ended up at 0 in the aliasing PPGTT
> case by checking the resulting offsets following the call. I expect
> in most cases this would be fine.
>
> It should be pretty easy to extend Ruiling's patch to use a context
> flag to determine the behavior; is that something you can do? Any
> objections to this approach?

I am OK with adding a context flag to indicate "don't bind at 0". Any
objections from others? A rough sketch of how userspace might use such
a flag is at the end of this mail.

The patch is not from me; it is from David, and I am not familiar with
the KMD. David, could you help with this patch?

> It does mean that shared contexts need to be handled specially, or
> won't get the zero-page protection, but I think Mesa wants this
> behavior too, and libva probably wouldn't mind, so you could just
> require new versions of those that set this flag when telling people
> what's supported for proper NULL pointer handling.
>
> Any objections to that approach?
>
> Thanks,
> Jesse
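
For reference, a minimal sketch of the userspace side, assuming a
hypothetical I915_GEM_CONTEXT_NO_ZEROMAP context-creation flag (the flag
name, its value, and the reuse of the context-create 'pad' field are made
up for illustration; the actual uAPI is still to be decided in this
thread). The flow is: create the context with the flag, submit execbuf
against it, then check the offsets the kernel writes back to catch the
aliasing-PPGTT case where a buffer still lands at 0.

/* Sketch only: the NO_ZEROMAP flag and passing it via 'pad' are
 * hypothetical, not part of the current i915 uAPI. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#include <xf86drm.h>
#include <i915_drm.h>

#define I915_GEM_CONTEXT_NO_ZEROMAP (1u << 0)	/* hypothetical */

int main(void)
{
	struct drm_i915_gem_exec_object2 objects[2];
	struct drm_i915_gem_execbuffer2 execbuf;
	struct drm_i915_gem_context_create create;
	int fd, i;

	fd = open("/dev/dri/renderD128", O_RDWR);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* Ask the kernel not to bind anything at PPGTT offset 0 for this
	 * context.  The unused 'pad' field stands in for a flags field. */
	memset(&create, 0, sizeof(create));
	create.pad = I915_GEM_CONTEXT_NO_ZEROMAP;
	if (drmIoctl(fd, DRM_IOCTL_I915_GEM_CONTEXT_CREATE, &create)) {
		perror("context create");
		return 1;
	}

	/* Fill in real buffer handles, relocations and a batch here; the
	 * setup is elided to keep the sketch short. */
	memset(objects, 0, sizeof(objects));
	memset(&execbuf, 0, sizeof(execbuf));
	execbuf.buffers_ptr = (uintptr_t)objects;
	execbuf.buffer_count = 2;
	execbuf.rsvd1 = create.ctx_id;	/* submit against the new context */

	if (drmIoctl(fd, DRM_IOCTL_I915_GEM_EXECBUFFER2, &execbuf)) {
		perror("execbuf");
		return 1;
	}

	/* With aliasing PPGTT the flag can only be best effort, so check
	 * the offsets written back by the kernel and handle any buffer
	 * that still ended up at 0 (e.g. rebind it, or give up on NULL
	 * pointer detection for that allocation). */
	for (i = 0; i < 2; i++) {
		if (objects[i].offset == 0)
			fprintf(stderr, "buffer %d bound at offset 0\n", i);
	}

	close(fd);
	return 0;
}

The offset check at the end is what would let a library decide whether
the zero-page guarantee actually holds for a given context when only
aliasing PPGTT is available.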