On Sat, Dec 05, 2015 at 11:02:09AM +0000, Russell King - ARM Linux wrote:
> On Sat, Dec 05, 2015 at 11:12:08AM +0100, Daniel Vetter wrote:
> > Given that, I think the current etnaviv is a sound architecture. And
> > I'm not saying that because drm requires everything to be smashed into
> > one driver, since that's simply not the case.
>
> There are other reasons as well, mostly from the performance point of
> view. Having separate DRM devices for each GPU means you need to use
> dmabuf to share buffers across the GPUs. This brings with it several
> kinds of overhead:
>
> 1. having more fd usage in the client programs.
> 2. having more memory usage as a result.
> 3. having more locks, due to more object lists.
> 4. having the overhead from the DMA API when importing buffers between
>    different GPU nodes.
>
> From my performance measurements over the last month, the top hit is
> currently from the DMA API, so having to export and import buffers
> between different GPU devices is just going to make that worse.

Yeah, the DMA API for shared buffers is currently not up to the challenge;
we had the same problem with the intel driver. The big one is that GPUs
tend to do cache management themselves, or at least they all need CPU
caches to be flushed. But if you have multiple drivers using the same
memory, nothing currently keeps track of whether caches have been flushed
already or not, so you end up with double (or worse) flushing of your
buffers, which totally kills performance.

In theory dma-buf could keep track of who has flushed a buffer already,
but there's no implementation of that yet, and a generic one would have to
violate the current DMA API abstractions. So yeah, perf is going to tank
until that's solved, at least for some workloads. Video wasn't a problem
here since all you do is establish a set of shared buffers once, so all
the overhead is a one-time cost. But dynamic workloads like GL can't
amortize setup costs that easily.
-Daniel
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
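
To make the export/import overhead concrete, this is roughly what every
cross-device share looks like from userspace with PRIME. A minimal sketch
using the libdrm helpers; the wrapper function and its name are made up
for illustration, and error handling is trimmed:

#include <stdint.h>
#include <unistd.h>
#include <xf86drm.h>

/* Share one GEM buffer between two DRM device nodes: export it from GPU A
 * as a dma-buf fd, then import that fd on GPU B.  Every shared buffer
 * pays for an extra fd plus the DMA API work done on import. */
static int share_buffer(int fd_gpu_a, uint32_t handle_a,
                        int fd_gpu_b, uint32_t *handle_b)
{
        int prime_fd = -1;
        int ret;

        /* GPU A: GEM handle -> dma-buf fd */
        ret = drmPrimeHandleToFD(fd_gpu_a, handle_a, DRM_CLOEXEC, &prime_fd);
        if (ret)
                return ret;

        /* GPU B: dma-buf fd -> GEM handle on the other device */
        ret = drmPrimeFDToHandle(fd_gpu_b, prime_fd, handle_b);

        close(prime_fd); /* the import keeps its own reference */
        return ret;
}

With a single DRM device for all the GPU cores this round-trip, and the
per-buffer fd that comes with it, simply doesn't exist.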
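
And to sketch what the "dma-buf keeps track of who has flushed" idea could
look like: a per-buffer dirty bit that importers check before flushing and
that CPU writes set again. Purely hypothetical, none of these fields or
helpers exist in dma-buf today:

#include <linux/highmem.h>
#include <linux/mutex.h>
#include <linux/types.h>

/* Hypothetical per-dma-buf bookkeeping, not current kernel code. */
struct dma_buf_cache_state {
        struct mutex lock;
        bool cpu_caches_clean;  /* no CPU writes since the last flush */
};

/* Called by an importer before handing the buffer to its GPU: only the
 * first device after a CPU write actually pays for the cache flush. */
static void dma_buf_flush_for_device(struct dma_buf_cache_state *st,
                                     void *vaddr, int size)
{
        mutex_lock(&st->lock);
        if (!st->cpu_caches_clean) {
                /* stand-in for whatever arch-specific maintenance the
                 * importer's DMA mapping would otherwise do every time */
                flush_kernel_vmap_range(vaddr, size);
                st->cpu_caches_clean = true;
        }
        mutex_unlock(&st->lock);
}

/* Called when the CPU is about to write the buffer again. */
static void dma_buf_begin_cpu_write(struct dma_buf_cache_state *st)
{
        mutex_lock(&st->lock);
        st->cpu_caches_clean = false;
        mutex_unlock(&st->lock);
}

The awkward part is exactly the abstraction problem above: to skip a flush
the dma-buf layer would have to know what the importer's dma_map_* call
would have done, which is knowledge the DMA API currently keeps to itself.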