On 2023-08-21 15:41, Zeng, Oak wrote:
I have thought about emulating BO allocation APIs on top of system SVM.
This was in the context of KFD where memory management is not tied into
command submissions APIs, which would add a whole other layer of
complexity. The main unsolved (unsolvable?) problem I ran into was that
there is no way to share SVM memory as DMABufs. So there is no good way
to support applications that expect to share memory in that way.
Great point. I also discussed the dmabuf topic with Mike (cc'ed). dmabuf is a technology created specifically for BO-based drivers (and other drivers) to share buffers between devices. HMM/system SVM doesn't need this mechanism: malloc'ed memory is by its nature already shared between the CPU and all devices within one process. We can simply submit GPU kernels referencing malloc'ed memory to any device and let the KMD decide the memory placement (e.g., map in place or migrate). There is no need for buffer export/import in the HMM/system SVM world.
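To make the argument concrete, here is a minimal sketch of the usage model being described. This is an illustration only: it assumes a CUDA 12.2+ toolkit on a Linux kernel with HMM support, where system-allocated (malloc'ed) memory is directly accessible from the GPU; the `scale` kernel and launch geometry are made up for the example.

```cuda
// Sketch, not a definitive implementation: assumes an HMM-capable system
// where malloc'ed memory is GPU-accessible with no export/import step.
#include <stdio.h>
#include <stdlib.h>

__global__ void scale(int *p, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        p[i] *= 2;   /* GPU dereferences a plain CPU pointer */
}

int main(void)
{
    int n = 1024;
    int *p = (int *)malloc(n * sizeof(int));  /* ordinary malloc, no GPU allocator */
    for (int i = 0; i < n; i++)
        p[i] = i;

    /* The same pointer can be handed to a kernel on any device in the
     * process; the KMD decides placement (map in place or migrate on fault). */
    scale<<<(n + 255) / 256, 256>>>(p, n);
    cudaDeviceSynchronize();

    printf("p[10] = %d\n", p[10]);
    free(p);
    return 0;
}
```

On a stack without system-allocated memory support, the same sharing would require `cudaMallocManaged` or explicit copies, which is the extra allocation-API layer being argued against here.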
I disagree. DMABuf can be used for sharing memory between processes. And
it can be used for sharing memory with 3rd-party devices via PCIe P2P
(e.g. a Mellanox NIC). You cannot easily do that with malloc'ed memory.
POSIX IPC requires that you know that you'll be sharing the memory at
allocation time. It adds overhead. And because it's file-backed, it's
currently incompatible with migration. And HMM currently doesn't have a
solution for P2P. Any access by a different device causes a migration to
system memory.
Regards,
Felix
So yes, from the buffer-sharing perspective, the design philosophy is also very different.
Thanks,
Oak