RE: [RFC PATCH v3 1/2] mempinfd: Add new syscall to provide memory pin

> -----Original Message-----
> From: Matthew Wilcox [mailto:willy@xxxxxxxxxxxxx]
> Sent: Monday, February 8, 2021 10:34 AM
> To: Wangzhou (B) <wangzhou1@xxxxxxxxxxxxx>
> Cc: linux-kernel@xxxxxxxxxxxxxxx; iommu@xxxxxxxxxxxxxxxxxxxxxxxxxx;
> linux-mm@xxxxxxxxx; linux-arm-kernel@xxxxxxxxxxxxxxxxxxx;
> linux-api@xxxxxxxxxxxxxxx; Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>;
> Alexander Viro <viro@xxxxxxxxxxxxxxxxxx>; gregkh@xxxxxxxxxxxxxxxxxxx; Song
> Bao Hua (Barry Song) <song.bao.hua@xxxxxxxxxxxxx>; jgg@xxxxxxxx;
> kevin.tian@xxxxxxxxx; jean-philippe@xxxxxxxxxx; eric.auger@xxxxxxxxxx;
> Liguozhu (Kenneth) <liguozhu@xxxxxxxxxxxxx>; zhangfei.gao@xxxxxxxxxx;
> chensihang (A) <chensihang1@xxxxxxxxxxxxx>
> Subject: Re: [RFC PATCH v3 1/2] mempinfd: Add new syscall to provide memory
> pin
> 
> On Sun, Feb 07, 2021 at 04:18:03PM +0800, Zhou Wang wrote:
> > SVA (shared virtual address) offers a way for a device to safely share
> > a process's virtual address space, which makes user space device driver
> > coding more convenient. However, IO page faults may happen when doing
> > DMA operations. As the latency of an IO page fault is relatively high,
> > DMA performance is severely affected when IO page faults occur.
> > In the long term, DMA performance will not be stable.
> >
> > In high-performance I/O cases, accelerators may want to perform I/O on
> > memory without IO page faults, which can otherwise increase latency
> > dramatically. Current memory-related APIs cannot achieve this: e.g.
> > mlock can only prevent memory from being swapped out to a backing
> > device, while page migration can still trigger IO page faults.
> 
> Well ... we have two requirements.  The application wants to not take
> page faults.  The system wants to move the application to a different
> NUMA node in order to optimise overall performance.  Why should the
> application's desires take precedence over the kernel's desires?  And why
> should it be done this way rather than by the sysadmin using numactl to
> lock the application to a particular node?

The NUMA balancer is just one of many causes of page migration. Even a
simple alloc_pages() can cause memory migration within a single NUMA
node or on a UMA system.

The other reasons for page migration include, but are not limited to:
* memory movement due to CMA;
* memory movement due to huge page creation.

We can hardly ask users to disable compaction, CMA and huge pages
across the whole system.
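
To make the point concrete, below is a minimal userspace sketch (the
buffer size and usage are only illustrative): mlock() keeps the buffer
resident, but nothing stops the kernel from migrating the pages
afterwards, so an SVA device doing DMA to the buffer can still take IO
page faults.

#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
        size_t len = 64 << 20;  /* illustrative 64MB "DMA" buffer */
        void *buf;

        buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (buf == MAP_FAILED) {
                perror("mmap");
                return 1;
        }

        /* mlock() keeps the pages resident: they cannot be swapped out. */
        if (mlock(buf, len)) {
                perror("mlock");
                return 1;
        }

        /*
         * But the physical location of these pages is not fixed:
         * compaction, CMA allocations or huge page collapse may still
         * migrate them, and every migration means an IO page fault for
         * an SVA device doing DMA on this buffer.
         */
        memset(buf, 0, len);

        return 0;
}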

On the other hand, numactl doesn't always bind memory to a single NUMA
node; when an application needs many CPUs, it may be bound to more than
one memory node, as in the example below.
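
For example (node numbers are only illustrative), something like:

        numactl --cpunodebind=0,1 --membind=0,1 ./app

restricts the application to nodes 0 and 1, but the kernel is still
free to migrate its pages within and between those two nodes.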

Thanks
Barry





