On Mon, Mar 11, 2013 at 08:29:59AM +0200, Terje Bergström wrote:
> On 08.03.2013 22:43, Thierry Reding wrote:
> > A bo is just a buffer object, so I don't see why the name shouldn't
> > be used. The name is in no way specific to DRM or GEM. But the point
> > that I was trying to make was that there is nothing to suggest that
> > we couldn't use drm_gem_object as the underlying scaffold to base all
> > host1x buffer objects on.
> >
> > Furthermore I don't understand why you've chosen this approach. It
> > is completely different from what other drivers do and therefore
> > makes it more difficult to comprehend. That alone I could live with
> > if there were any advantages to that approach, but as far as I can
> > tell there are none.
>
> I was following the plan we agreed on earlier in email discussion with
> you and Lucas:
>
> On 29.11.2012 11:09, Lucas Stach wrote:
> > We should aim for a clean split here. GEM handles are something which
> > is really specific to how DRM works and as such should be constructed
> > by tegradrm. nvhost should really just manage allocations/virtual
> > address space and provide something that is able to back all the GEM
> > handle operations.
> >
> > nvhost has really no reason at all to even know about GEM handles.
> > If you back a GEM object by a nvhost object you can just peel out
> > the nvhost handles from the GEM wrapper in the tegradrm submit ioctl
> > handler and queue the job to nvhost using its native handles.
> >
> > This way you would also be able to construct different handles (like
> > GEM obj or V4L2 buffers) from the same backing nvhost object. Note
> > that I'm not sure how useful this would be, but it seems like a
> > reasonable design to me being able to do so.
>
> With this structure, we are already prepared for non-DRM APIs. It's a
> matter of familiarity of code versus future expansion. Code paths for
> both are as simple/complex, so neither has a direct technical
> superiority in performance.
>
> I know other DRM drivers have opted to hard-code the GEM dependency
> throughout the code. Then again, host1x hardware is managing much more
> than graphics, so we need to think outside the DRM box, too.

This sounds a bit over-engineered at this point in time. DRM is
currently the only user. Is anybody working on any non-DRM drivers that
would use this?

Even that aside, I don't think host1x_mem_handle is a good choice of
name here. The objects are much more than handles. They are in fact
buffer objects, which can optionally be attached to a handle. I also
think that using a void * to store the handle-specific data isn't such
a good idea.

So how about the following proposal, which I think might satisfy both
of us:

struct host1x_bo;

struct host1x_bo_ops {
	struct host1x_bo *(*get)(struct host1x_bo *bo);
	void (*put)(struct host1x_bo *bo);
	dma_addr_t (*pin)(struct host1x_bo *bo, struct sg_table **sgt);
	...
};

struct host1x_bo *host1x_bo_get(struct host1x_bo *bo);
void host1x_bo_put(struct host1x_bo *bo);
dma_addr_t host1x_bo_pin(struct host1x_bo *bo, struct sg_table **sgt);
...

struct host1x_bo {
	const struct host1x_bo_ops *ops;
	...
};

struct tegra_drm_bo {
	struct host1x_bo base;
	...
};

That way you can get rid of the host1x_memmgr_create_handle() helper
and instead embed host1x_bo into driver-/framework-specific structures
with the necessary initialization. It also allows you to interact
directly with the objects instead of having to go through the memmgr
API.
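To sketch what I have in mind (purely illustrative; host1x_bo_init(),
to_tegra_drm_bo() and tegra_drm_bo_ops are made-up names and the
comments only mark where the real work would go): the exported helpers
would simply dispatch through the ops table, and the tegradrm
implementation would recover its own structure from the embedded
host1x_bo with container_of():

static inline void host1x_bo_init(struct host1x_bo *bo,
				  const struct host1x_bo_ops *ops)
{
	bo->ops = ops;
}

struct host1x_bo *host1x_bo_get(struct host1x_bo *bo)
{
	return bo->ops->get(bo);
}

void host1x_bo_put(struct host1x_bo *bo)
{
	bo->ops->put(bo);
}

dma_addr_t host1x_bo_pin(struct host1x_bo *bo, struct sg_table **sgt)
{
	return bo->ops->pin(bo, sgt);
}

/* tegradrm side: get back to the wrapping object */
static inline struct tegra_drm_bo *to_tegra_drm_bo(struct host1x_bo *bo)
{
	return container_of(bo, struct tegra_drm_bo, base);
}

static struct host1x_bo *tegra_drm_bo_get(struct host1x_bo *bo)
{
	struct tegra_drm_bo *obj = to_tegra_drm_bo(bo);

	/* take a reference on whatever backs obj (e.g. its GEM object) */

	return &obj->base;
}

static const struct host1x_bo_ops tegra_drm_bo_ops = {
	.get = tegra_drm_bo_get,
	/* .put, .pin, ... implemented along the same lines */
};

tegradrm would then call host1x_bo_init(&obj->base, &tegra_drm_bo_ops)
when it creates its objects, and host1x itself never needs to know what
structure the host1x_bo is embedded in.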
The memory manager doesn't really exist anymore, so keeping the name in
the API is only confusing. Your current proposal deals with memory
handles directly already, so it's really just making the naming more
consistent.

Thierry