Re: [PATCH v11 00/15] HMM (Heterogeneous Memory Management)

On 21/10/2015 23:59, Jérôme Glisse wrote:
> HMM (Heterogeneous Memory Management) is a helper layer
> for device drivers; its main features are:
>    - Shadow the CPU page table of a process into a device
>      specific page table format and keep both page tables
>      synchronized.
>    - Handle DMA mapping of system RAM pages on behalf of
>      the device (for shadowed page table entries).
>    - Migrate private anonymous memory to private device
>      memory and handle CPU page faults (which trigger a
>      migration back to system memory so the CPU can access
>      it).
> 
> Benefits of HMM:
>    - Avoids the current model where device drivers have to
>      pin pages, which blocks several kernel features (KSM,
>      migration, ...).
>    - No impact on existing workloads that do not use HMM
>      (it only adds a couple more if() checks to common code
>      paths).
>    - Intended as common infrastructure for several
>      different hardware vendors, as of today Mellanox and
>      NVidia.
>    - Allows userspace APIs to move away from the explicit
>      copy code path, where the application programmer has
>      to manually manage memcpy to and from device memory.
>    - Transparent to userspace, for instance allowing a
>      library to use the GPU without involving the
>      application linked against it.
> 
> I expect other hardware companies to express interest in
> HMM and eventually start using it with their new hardware.
> I give a more in-depth motivation after the change log.
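
To make the mirroring flow described above concrete, here is a
rough sketch of how a driver would consume such a layer. It is a
hedged sketch only: the calls follow the hmm_range_fault() /
mmu_interval notifier interface from later mainline kernels, which
is not identical to this v11 patchset, and my_dev_program_pte() is
a made-up driver hook, not a kernel API.

#include <linux/hmm.h>
#include <linux/mmu_notifier.h>
#include <linux/mm.h>

/* Made-up driver hook: DMA-map the page and write the device PTE. */
static void my_dev_program_pte(struct page *page, unsigned long addr);

static int my_mirror_one_page(struct mmu_interval_notifier *notifier,
                              struct mm_struct *mm, struct mutex *pt_lock,
                              unsigned long addr)
{
        unsigned long pfn;
        struct hmm_range range = {
                .notifier      = notifier,
                .start         = addr,
                .end           = addr + PAGE_SIZE,
                .hmm_pfns      = &pfn,
                .default_flags = HMM_PFN_REQ_FAULT | HMM_PFN_REQ_WRITE,
        };
        int ret;

again:
        range.notifier_seq = mmu_interval_read_begin(notifier);

        mmap_read_lock(mm);
        ret = hmm_range_fault(&range);  /* faults in the CPU page table */
        mmap_read_unlock(mm);
        if (ret == -EBUSY)
                goto again;             /* collided with an invalidation */
        if (ret)
                return ret;

        mutex_lock(pt_lock);            /* same lock the invalidate cb takes */
        if (mmu_interval_read_retry(notifier, range.notifier_seq)) {
                mutex_unlock(pt_lock);
                goto again;             /* range invalidated since the fault */
        }
        my_dev_program_pte(hmm_pfn_to_page(pfn), addr);
        mutex_unlock(pt_lock);
        return 0;
}

The key point is the begin/fault/retry loop: the device page table
entry is only committed while holding the driver lock, and only if
no invalidation ran in between.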

The RDMA stack has had IO paging support since kernel v4.0, using
the mmu_notifier APIs to interface with the mm subsystem. As one
may expect, it allows RDMA applications to decrease the amount of
memory that needs to be pinned, and allows the kernel to better
allocate physical memory. HMM looks like a better API than raw
mmu_notifiers for that purpose, as it allows sharing more code. It
internally handles the things that any similar driver or subsystem
would need to do, such as synchronization between page fault
events and invalidations, and DMA-mapping pages for device use. It
looks like it could also be extended to assist in device
peer-to-peer memory mapping, allowing capable devices to transfer
data directly without CPU intervention.
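
For contrast, below is a sketch of the synchronization a driver
has to hand-roll today on top of raw mmu_notifiers: a sequence
counter that detects invalidations racing against the driver's own
device fault path (the same basic pattern ODP uses). The notifier
signatures match current v4.x kernels; everything prefixed my_ is
illustrative only, not a kernel API.

#include <linux/mmu_notifier.h>
#include <linux/mm.h>

struct my_mirror {
        struct mmu_notifier     mn;
        spinlock_t              lock;
        unsigned long           seq;    /* bumped on every invalidation */
};

static void my_invalidate_range_start(struct mmu_notifier *mn,
                                      struct mm_struct *mm,
                                      unsigned long start, unsigned long end)
{
        struct my_mirror *m = container_of(mn, struct my_mirror, mn);

        spin_lock(&m->lock);
        m->seq++;       /* also tear down device PTEs in [start, end) */
        spin_unlock(&m->lock);
}

static const struct mmu_notifier_ops my_mn_ops = {
        .invalidate_range_start = my_invalidate_range_start,
};

/* Setup (error handling omitted):
 *   m->mn.ops = &my_mn_ops;
 *   mmu_notifier_register(&m->mn, mm);
 */

/* Device fault path: fault the page, then commit the device PTE
 * only if no invalidation ran in the meantime. */
static int my_dev_fault(struct my_mirror *m, struct mm_struct *mm,
                        unsigned long addr)
{
        struct page *page;
        unsigned long seq;
        long pinned;

        for (;;) {
                spin_lock(&m->lock);
                seq = m->seq;
                spin_unlock(&m->lock);

                down_read(&mm->mmap_sem);
                pinned = get_user_pages(current, mm, addr, 1, 1, 0,
                                        &page, NULL);
                up_read(&mm->mmap_sem);
                if (pinned != 1)
                        return pinned < 0 ? pinned : -EFAULT;

                /* DMA-map "page" and prepare the device PTE here. */

                spin_lock(&m->lock);
                if (seq == m->seq) {    /* no racing invalidation: commit */
                        spin_unlock(&m->lock);
                        return 0;
                }
                spin_unlock(&m->lock);
                put_page(page);         /* raced; undo and retry */
        }
}

Every driver that mirrors memory ends up duplicating some variant
of this loop, which is exactly the code HMM would factor out.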

Regards,
Haggai
