Adding it to the GPU's DRM requires user-space to jump through quite a lot of hoops: in order to get both the scan-out GEM buffers and the DRI2 GEM buffers into a single device's namespace, it would have to use PRIME to export, say, dumb scan-out buffers from the display's DRM as dma_buf fds, import those dma_buf fds into the GPU's DRM, and then use flink to give the imported buffers a name in the GPU DRM's namespace (the first P.S. below spells that dance out). Yuck.

No, I think it is easier to just add allocation of GPU DRI2 buffers as a device-specific ioctl on the display controller's DRM. Indeed, this appears to be what the OMAP and Exynos DRM drivers (and maybe others) do. One device does all the allocations, so all buffers are already in the same namespace - no faffing with exporting & importing buffers in the DDX required.

We will need to figure out a way in the xf86-video-armsoc DDX to abstract those driver-specific allocation ioctls. Using GBM is an interesting idea - looking at the interface, it seems very, _very_ similar to Android's gralloc! Though I don't see how to get a system-wide name for a buffer that I can pass back to a client via DRI2? I assume gbm_bo_handle is process-local (see the second P.S.)? In the short term, I think we'll just use run-time detection of the underlying DRM and bake knowledge of specific DRMs into the DDX (the third P.S. sketches what I have in mind).

Anyway, I think we have a conclusion on the "how to allocate buffers for the X.Org/DRI stack" question. However, there are more advanced use-cases, like streaming from v4l2 to DRM, for which I remain convinced the current allocation model really doesn't work. At a minimum it causes significant code duplication in many DRM drivers and forces lots of per-SoC code into userspace which could otherwise be avoided. However, I'm keen not to go round in any more circles on this mail thread, so I suggest we defer that conversation to Linux Plumbers. :-)

Cheers,

Tom
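
P.S. To spell out the export/import/flink dance I'm complaining about, here's a minimal sketch using libdrm. Error paths leak the intermediate handles and fd, and share_scanout_with_gpu is just an illustrative name, not anything that exists today:

    #include <stdint.h>
    #include <string.h>
    #include <xf86drm.h>

    static int share_scanout_with_gpu(int display_fd, int gpu_fd,
                                      uint32_t width, uint32_t height,
                                      uint32_t *out_name)
    {
        struct drm_mode_create_dumb create = {
            .width  = width,
            .height = height,
            .bpp    = 32,
        };
        struct drm_gem_flink flink;
        uint32_t gpu_handle;
        int prime_fd;

        /* 1. Allocate a dumb scan-out buffer on the display's DRM. */
        if (drmIoctl(display_fd, DRM_IOCTL_MODE_CREATE_DUMB, &create))
            return -1;

        /* 2. Export it from the display's DRM as a dma_buf fd via PRIME. */
        if (drmPrimeHandleToFD(display_fd, create.handle, DRM_CLOEXEC,
                               &prime_fd))
            return -1;

        /* 3. Import the dma_buf fd into the GPU's DRM. */
        if (drmPrimeFDToHandle(gpu_fd, prime_fd, &gpu_handle))
            return -1;

        /* 4. flink the imported buffer to get a global name in the GPU
         *    DRM's namespace, which can be handed to DRI2 clients. */
        memset(&flink, 0, sizeof(flink));
        flink.handle = gpu_handle;
        if (drmIoctl(gpu_fd, DRM_IOCTL_GEM_FLINK, &flink))
            return -1;

        *out_name = flink.name;
        return 0;
    }

Four steps across two devices just to give one buffer a name - hence "yuck".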
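
P.P.S. On GBM: as far as I can tell from gbm.h, the most you can get back out of a buffer object is a GEM handle, which is only meaningful to the process that opened the device. Something like the below - hedged, since this is from reading the header rather than from working code:

    #include <fcntl.h>
    #include <stdint.h>
    #include <unistd.h>
    #include <gbm.h>

    int main(void)
    {
        int fd = open("/dev/dri/card0", O_RDWR);
        struct gbm_device *gbm = gbm_create_device(fd);
        struct gbm_bo *bo = gbm_bo_create(gbm, 1280, 720,
                                          GBM_BO_FORMAT_XRGB8888,
                                          GBM_BO_USE_SCANOUT |
                                          GBM_BO_USE_RENDERING);

        /* A GEM handle, valid only for this process's open of the
         * device. I don't see a flink-style call to turn it into a
         * global name I could pass back to a DRI2 client. */
        uint32_t handle = gbm_bo_get_handle(bo).u32;
        (void)handle;

        gbm_bo_destroy(bo);
        gbm_device_destroy(gbm);
        close(fd);
        return 0;
    }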
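
P.P.P.S. By "run-time detection" I mean something along these lines in the DDX: key a table of allocation hooks off the driver name reported by drmGetVersion(). The struct armsoc_ops abstraction is made up for illustration - nothing like it exists in xf86-video-armsoc yet - and I'm going from memory on the Exynos ioctl and the "exynos"/"omapdrm" driver names, so treat the details as assumptions:

    #include <stdint.h>
    #include <string.h>
    #include <xf86drm.h>
    #include <drm/exynos_drm.h> /* struct drm_exynos_gem_create */

    struct armsoc_ops {
        const char *drm_name; /* as reported by drmGetVersion() */
        int (*create_gem)(int fd, uint64_t size, uint32_t *handle);
    };

    static int exynos_create_gem(int fd, uint64_t size, uint32_t *handle)
    {
        struct drm_exynos_gem_create req = { .size = size };

        if (drmIoctl(fd, DRM_IOCTL_EXYNOS_GEM_CREATE, &req))
            return -1;
        *handle = req.handle;
        return 0;
    }

    static int omap_create_gem(int fd, uint64_t size, uint32_t *handle)
    {
        /* Would wrap DRM_IOCTL_OMAP_GEM_NEW in much the same way;
         * omitted here. */
        (void)fd; (void)size; (void)handle;
        return -1;
    }

    static const struct armsoc_ops known_drms[] = {
        { "exynos",  exynos_create_gem },
        { "omapdrm", omap_create_gem },
    };

    static const struct armsoc_ops *detect_drm(int fd)
    {
        const struct armsoc_ops *ops = NULL;
        drmVersionPtr ver = drmGetVersion(fd);
        size_t i;

        if (!ver)
            return NULL;
        for (i = 0; i < sizeof(known_drms) / sizeof(known_drms[0]); i++) {
            if (!strcmp(ver->name, known_drms[i].drm_name)) {
                ops = &known_drms[i];
                break;
            }
        }
        drmFreeVersion(ver);
        return ops;
    }

The ugly part is that every new SoC means another entry in that table - which is exactly the per-SoC userspace code I'd like to avoid in the longer term.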