On Wednesday, 28.11.2012, at 16:45 +0200, Terje Bergström wrote:
> On 28.11.2012 16:06, Lucas Stach wrote:
> > Why do we even need/use dma-buf for this use case? This is all one
> > DRM device, even if we separate host1x and gr2d as implementation
> > modules.
>
> I didn't want to implement a dependency on drm gem objects in nvhost,
> but we have thought about doing that. dma-buf brings quite a lot of
> overhead, so implementing support for gem buffers would make the
> sequence a bit leaner.
>
> nvhost already has the infrastructure to support multiple memory
> managers.
>
To be honest I still don't grok all of this, but I'll nonetheless try
my best.

Anyway, shouldn't nvhost be something like an allocator used by host1x
clients, with the added ability to do relocs/binding of buffers into
client address spaces, to refcount buffers and to import/export
dma-bufs? In that case nvhost objects would just be used to back DRM
GEM objects (see the sketch at the end of this mail). If using GEM
objects in the DRM driver introduces any cross dependencies with
nvhost, you should take a step back and ask yourself whether the
current design is the right way to go.

> > So the standard way of doing this is:
> > 1. create a gem object for the pushbuffer
> > 2. create a fake mmap offset for the gem object
> > 3. map the pushbuffer using the fake offset on the drm device
> > 4. at submit time, zap the mapping
> >
> > You need this logic anyway, as normally we don't rely on userspace
> > to sync GPU and CPU, but use the kernel to handle the concurrency
> > issues.
>
> Taking a step back - 2D streams are actually very short, in the order
> of <100 bytes. Just copying them to kernel space would actually be
> faster than doing MMU operations.
>
Is this always the case because of the limited abilities of the gr2d
engine, or is it just your current driver flushing the stream very
often?

> I think for the Tegra20 and non-IOMMU case, we just need to copy the
> command stream to a kernel buffer. In the Tegra30 IOMMU case,
> references to user space buffers are fine, as tampering with the
> streams doesn't have any ill effects.
>
In what way is it a good design choice to let the CPU happily alter
_any_ buffer the GPU is busy processing, without getting the
concurrency right?

Please keep in mind that the interfaces you are introducing now have
to be supported for a virtually unlimited time, and you might not be
able to scrub your mistakes later on without going through a lot of
hassle. To avoid many of those mistakes it might be a good idea to
look at how other drivers use the DRM infrastructure, and to depart
from those proven schemes only where it is really necessary or
worthwhile.

Regards,
Lucas
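
To make the "nvhost objects back DRM GEM objects" idea a bit more
concrete, here is a minimal sketch of what such a wrapper could look
like. Everything nvhost-/tegra-specific in it is hypothetical: struct
tegra_bo, struct nvhost_bo and nvhost_bo_alloc() are invented names;
only struct drm_gem_object and drm_gem_object_init() are existing DRM
API.

	/*
	 * Hypothetical sketch: a driver-private buffer object that embeds a
	 * DRM GEM object and is backed by an nvhost allocation.  tegra_bo,
	 * nvhost_bo and nvhost_bo_alloc() are made-up names; only the
	 * drm_gem_* parts are real DRM API.
	 */
	#include <drm/drmP.h>
	#include <linux/slab.h>
	#include <linux/err.h>

	/* hypothetical allocator entry point exported by nvhost */
	struct nvhost_bo;
	struct nvhost_bo *nvhost_bo_alloc(size_t size);

	struct tegra_bo {
		struct drm_gem_object gem;	/* what userspace sees via GEM */
		struct nvhost_bo *backing;	/* what host1x clients map */
	};

	static struct tegra_bo *tegra_bo_create(struct drm_device *drm,
						size_t size)
	{
		struct tegra_bo *bo;

		bo = kzalloc(sizeof(*bo), GFP_KERNEL);
		if (!bo)
			return ERR_PTR(-ENOMEM);

		if (drm_gem_object_init(drm, &bo->gem, PAGE_ALIGN(size))) {
			kfree(bo);
			return ERR_PTR(-ENOMEM);
		}

		/* hypothetical nvhost call; error handling elided for brevity */
		bo->backing = nvhost_bo_alloc(size);

		return bo;
	}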
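
For reference, a rough sketch of the pushbuffer scheme in the quoted
steps 1-4, along the lines of what i915 did on kernels of that era.
The tegra_pushbuf_* names are invented; drm_gem_object_init(),
drm_gem_create_mmap_offset() and unmap_mapping_range() are real kernel
APIs, but fields like map_list and dev_mapping have since been replaced
by the drm_vma_* helpers, so the details depend on the kernel version.

	#include <drm/drmP.h>

	/* 1 + 2: back the pushbuffer with a GEM object, create its fake offset */
	static int tegra_pushbuf_init(struct drm_device *drm,
				      struct drm_gem_object *gem, size_t size)
	{
		int err;

		err = drm_gem_object_init(drm, gem, PAGE_ALIGN(size));
		if (err < 0)
			return err;

		/*
		 * 3: userspace maps the buffer by passing this fake offset
		 * to mmap() on the DRM device fd.
		 */
		return drm_gem_create_mmap_offset(gem);
	}

	/*
	 * 4: at submit time, zap the CPU mapping.  Any further CPU access
	 * faults into the driver, which can then wait for the engine to
	 * finish instead of letting userspace scribble over in-flight
	 * commands.
	 */
	static void tegra_pushbuf_zap_mapping(struct drm_gem_object *gem)
	{
		if (!gem->dev->dev_mapping)
			return;

		unmap_mapping_range(gem->dev->dev_mapping,
				    (loff_t)gem->map_list.hash.key << PAGE_SHIFT,
				    gem->size, 1);
	}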
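
And for the "just copy the stream" path: a minimal sketch of pulling a
small command stream into kernel memory at submit time, so later CPU
writes by userspace cannot change what the engine fetches.
tegra_stream_copy() and the size limit are invented for illustration;
copy_from_user() is the standard kernel API.

	#include <linux/slab.h>
	#include <linux/uaccess.h>
	#include <linux/err.h>

	/* streams larger than this would have to be pinned/mapped instead */
	#define TEGRA_MAX_COPIED_STREAM	4096

	static void *tegra_stream_copy(const void __user *ubuf, size_t size)
	{
		void *kbuf;

		if (size > TEGRA_MAX_COPIED_STREAM)
			return ERR_PTR(-E2BIG);

		kbuf = kmalloc(size, GFP_KERNEL);
		if (!kbuf)
			return ERR_PTR(-ENOMEM);

		/* snapshot the user stream; userspace can't touch it afterwards */
		if (copy_from_user(kbuf, ubuf, size)) {
			kfree(kbuf);
			return ERR_PTR(-EFAULT);
		}

		return kbuf;
	}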