Role of DMA Heaps vs GEM in allocation

Hi,

I'm currently working on a new UAPI for Host1x/TegraDRM (see the first draft in the thread "[RFC] Host1x/TegraDRM UAPI"[1]). One question that has come up concerns the buffer allocation mechanism. Traditionally, DRM drivers provide custom GEM allocation IOCTLs. However, we now have DMA Heaps, which would be sufficient for TegraDRM's needs, so we could skip implementing GEM allocation IOCTLs in the TegraDRM UAPI and rely on importing DMA-BUFs instead. This would mean less code on TegraDRM's side.
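
For reference, the userspace flow I have in mind would look roughly like the sketch below (untested, links against libdrm; the system heap and /dev/dri/card0 paths are just placeholders, nothing TegraDRM-specific). The buffer is allocated through the DMA heap char device and then imported into the DRM device as a GEM handle through PRIME:

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/dma-heap.h>
#include <xf86drm.h>

int main(void)
{
        int heap_fd = open("/dev/dma_heap/system", O_RDWR | O_CLOEXEC);
        int drm_fd = open("/dev/dri/card0", O_RDWR | O_CLOEXEC);
        if (heap_fd < 0 || drm_fd < 0) {
                perror("open");
                return 1;
        }

        /* Allocate a 1 MiB buffer; the heap hands back a DMA-BUF FD. */
        struct dma_heap_allocation_data alloc = {
                .len = 1 << 20,
                .fd_flags = O_RDWR | O_CLOEXEC,
        };
        if (ioctl(heap_fd, DMA_HEAP_IOCTL_ALLOC, &alloc) < 0) {
                perror("DMA_HEAP_IOCTL_ALLOC");
                return 1;
        }

        /*
         * Import the DMA-BUF into the DRM device as a GEM handle via
         * PRIME. The DMA-BUF FD stays open alongside the DRM FD, which
         * is where the extra per-buffer FD cost comes from.
         */
        uint32_t handle;
        if (drmPrimeFDToHandle(drm_fd, alloc.fd, &handle) < 0) {
                fprintf(stderr, "drmPrimeFDToHandle failed\n");
                return 1;
        }

        printf("imported DMA-BUF fd %u as GEM handle %u\n", alloc.fd, handle);

        close(alloc.fd);
        close(drm_fd);
        close(heap_fd);
        return 0;
}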

However, one complication with using DMA Heaps is that the interface only hands out DMA-BUF FDs, so a user application could run out of free file descriptors if it does not raise its soft FD limit. This would especially be a problem for existing applications that worked fine with the traditional GEM model and never needed to adjust their FD limits, but would then fail in some situations due to the increased FD usage of DMA-BUF FDs.
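
(The workaround itself is simple; something along these lines at startup is enough to raise the soft limit to the hard limit, but existing binaries obviously won't be doing it:)

#include <stdio.h>
#include <sys/resource.h>

/* Raise the soft RLIMIT_NOFILE limit to the hard limit, which is the
 * kind of adjustment a DMA-BUF-heavy application would need to make. */
static int raise_fd_limit(void)
{
        struct rlimit rl;

        if (getrlimit(RLIMIT_NOFILE, &rl) < 0) {
                perror("getrlimit");
                return -1;
        }

        rl.rlim_cur = rl.rlim_max;

        if (setrlimit(RLIMIT_NOFILE, &rl) < 0) {
                perror("setrlimit");
                return -1;
        }

        return 0;
}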

My question is then: what is the role of DMA Heaps? If they are meant to serve as a central allocator, should the FD issue be left to applications, or addressed somehow? Should they be considered a viable alternative to driver-specific GEM allocation IOCTLs?

Thanks,
Mikko

[1] https://www.spinics.net/lists/dri-devel/msg262021.html


