Re: Regarding HMM

On 18-08-2020 10:36 pm, Ralph Campbell wrote:

On 8/18/20 12:15 AM, Valmiki wrote:
Hi All,

I'm trying to understand heterogeneous memory management (HMM), and I have the following doubts.

If HMM is being used, does that mean we don't have to use a DMA controller on the device for memory transfers? And without DMA, if software is managing the page faults and migrations, will there be any performance impact?

Is HMM targeted at specific use cases where there is no DMA controller on the device?

Regards,
Valmiki


There are two APIs that are part of "HMM" and are independent of each other.

hmm_range_fault() is for getting the physical address of a system-resident memory page that a device can map, without pinning the page the usual way, i.e. without I/O raising the page reference count. The device driver has to handle invalidation callbacks to remove the device mapping. This lets the device access the page without moving it.
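
For reference, the usual calling pattern looks roughly like this. It is a minimal sketch following the example in Documentation/vm/hmm.rst; driver_update_lock and the device page table update are placeholders, and the caller is assumed to hold a reference on the mm:

#include <linux/hmm.h>
#include <linux/mmu_notifier.h>
#include <linux/mm.h>
#include <linux/mutex.h>

/* Placeholder for whatever lock the driver uses to serialize
 * device page table updates against invalidation callbacks. */
static DEFINE_MUTEX(driver_update_lock);

int driver_populate_range(struct mmu_interval_notifier *ni,
                          unsigned long start, unsigned long end,
                          unsigned long *hmm_pfns)
{
        struct hmm_range range = {
                .notifier      = ni,
                .start         = start,
                .end           = end,
                .hmm_pfns      = hmm_pfns,
                .default_flags = HMM_PFN_REQ_FAULT,
        };
        int ret;

again:
        range.notifier_seq = mmu_interval_read_begin(ni);
        mmap_read_lock(ni->mm);
        ret = hmm_range_fault(&range);
        mmap_read_unlock(ni->mm);
        if (ret) {
                if (ret == -EBUSY)
                        goto again;     /* range was invalidated, retry */
                return ret;
        }

        mutex_lock(&driver_update_lock);
        if (mmu_interval_read_retry(ni, range.notifier_seq)) {
                /* An invalidation raced with the fault; start over. */
                mutex_unlock(&driver_update_lock);
                goto again;
        }

        /* ... program the device page table from hmm_pfns[] ... */

        mutex_unlock(&driver_update_lock);
        return 0;
}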

migrate_vma_setup(), migrate_vma_pages(), and migrate_vma_finalize() are used by the device driver to migrate data to device private memory. After migration, the system memory is freed and the CPU page table holds an invalid PTE that points to the device private struct page (similar to a swap PTE). If the CPU process faults on that address, there is a callback to the driver to migrate it back to system memory. This is where device DMA engines can be used to copy data to/from system memory and device private memory.
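
The calling sequence, roughly (again just a sketch: the fixed-size arrays assume a range of at most 64 pages, driver_alloc_device_page() and driver_pgmap_owner are hypothetical, page locking and error handling are elided, and the copy step stands in for the device's DMA engine):

#include <linux/migrate.h>
#include <linux/mm.h>

struct page *driver_alloc_device_page(void);    /* hypothetical helper */
extern void *driver_pgmap_owner;    /* hypothetical: matches our dev_pagemap owner */

static int driver_migrate_to_device(struct vm_area_struct *vma,
                                    unsigned long start, unsigned long end)
{
        unsigned long src[64] = { 0 }, dst[64] = { 0 };
        struct migrate_vma args = {
                .vma         = vma,
                .src         = src,
                .dst         = dst,
                .start       = start,
                .end         = end,
                .pgmap_owner = driver_pgmap_owner,
                .flags       = MIGRATE_VMA_SELECT_SYSTEM,
        };
        unsigned long i;
        int ret;

        ret = migrate_vma_setup(&args);
        if (ret)
                return ret;

        for (i = 0; i < args.npages; i++) {
                struct page *dpage;

                if (!(src[i] & MIGRATE_PFN_MIGRATE))
                        continue;       /* this page can't be migrated */

                /* Allocate a device private page for the destination. */
                dpage = driver_alloc_device_page();

                /* ... copy the source page to dpage, ideally with the
                 * device's DMA engine ... */

                dst[i] = migrate_pfn(page_to_pfn(dpage));
        }

        migrate_vma_pages(&args);
        migrate_vma_finalize(&args);
        return 0;
}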

The use case for the above is to be able to run code such as OpenCL on GPUs and CPUs using the same virtual addresses without having to call special memory allocators.
In other words, just use mmap() and malloc() and not clSVMAlloc().
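
For illustration, the host side can then look like the snippet below (hedged: "kernel" and "queue" are assumed to come from the usual OpenCL setup boilerplate, which is omitted):

#include <CL/cl.h>
#include <stdlib.h>

/* With fine-grained system SVM, a plain malloc'ed pointer can be
 * handed straight to the GPU; no clSVMAlloc() call is needed. */
void launch_on_malloced_buffer(cl_kernel kernel, cl_command_queue queue,
                               size_t n)
{
        float *data = malloc(n * sizeof(*data));        /* not clSVMAlloc() */

        clSetKernelArgSVMPointer(kernel, 0, data);
        clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &n, NULL,
                               0, NULL, NULL);
        clFinish(queue);
        free(data);
}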

There is a performance consideration here. If the GPU accesses the data over PCIe to system memory, there is much less bandwidth than accessing local GPU memory. If the data is to be accessed/used many times, it can be more efficient to migrate the data to local GPU memory. If the data is only accessed a few times, then it is probably more efficient to map system memory.

Thanks, Ralph, for the clarification.
