> It looks like the driver might be allocated a depth or stencil buffer
> without knowing whether it will be used. The buffer is then "grown" if
> it is needed by the GPU. The problem is it is possible for the
> application to access it later.

Hmm. I "think" you should be able to use a dummy (unbacked) Z/S buffer
when it won't be used, and as soon as the *driver* decides it will be
used (e.g. by setting the MALI_MFBD_DEPTH_WRITE bit), *that* is when you
allocate a real memory-backed BO. Neither case needs to be growable;
growable just pushes the logic into kernelspace (instead of handling it
in userspace).

The only wrinkle is if you need to give out addresses a priori, but that
could be solved by a mechanism to mmap a BO to a particular CPU address,
I think. (I recall MEM_ALIAS in kbase might be relevant?)

> * Use HMM: CPU VA==GPU VA. This nicely solves the problem, but falls over
> badly when the GPU VA size is smaller than the user space VA size - which is
> sadly true on many 64 bit integrations.
>
> * Provide an allocation flag which causes the kernel driver to not pick a
> GPU address until the buffer is mapped on the CPU. The mmap callback would
> then need to look for a region that is free on both the CPU and GPU.
>
> The second is obviously most similar to the kbase approach. kbase simplifies
> things because the kernel driver has the ultimate say over whether the
> buffer is SAME_VA or not. So on 64 bit user space we upgrade everything to
> be SAME_VA - which means the GPU VA space just follows the CPU VA (similar
> to HMM).

I'll let Rob chime in on this one.

Thank you for the detailed write-up!

-Alyssa