On Mon, Feb 26, 2018 at 09:38:19AM -0700, Nathan Hjelm wrote:
> All MPI implementations have support for using CMA to transfer data
> between local processes. The performance is fairly good (not as good as
> XPMEM) but the interface limits what we can do with remote process
> memory (no atomics). I have not heard about this new proposal. What is
> the benefit of the proposed calls over the existing calls?

The proposed system call combines the functionality of process_vm_readv
and vmsplice [1] and is particularly useful when one needs to read the
remote process memory and then write it to a file descriptor. In this
case a sequence of process_vm_readv() + write() calls that involves two
copies of data can be replaced with process_vmsplice() + splice(), which
does not involve a copy at all.

[1] https://lkml.org/lkml/2018/1/9/32

> -Nathan
> 
> > On Feb 26, 2018, at 2:02 AM, Pavel Emelyanov <xemul@xxxxxxxxxxxxx> wrote:
> > 
> > On 02/21/2018 03:44 AM, Andrew Morton wrote:
> >> On Tue, 9 Jan 2018 08:30:49 +0200 Mike Rapoport <rppt@xxxxxxxxxxxxxxxxxx> wrote:
> >> 
> >>> This patch introduces a new process_vmsplice system call that combines
> >>> the functionality of process_vm_readv and vmsplice.
> >> 
> >> All seems fairly straightforward. The big question is: do we know that
> >> people will actually use this, and get sufficient value from it to
> >> justify its addition?
> > 
> > Yes, that's what bothers us a lot too :) I've tried to start with finding out if anyone
> > used the sys_read/write_process_vm() calls, but failed :( Does anybody know how popular
> > these syscalls are? If their users operate on large amounts of memory, they could benefit
> > from the proposed splice extension.
> > 
> > -- Pavel

-- 
Sincerely yours,
Mike.
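
For illustration, here is a minimal sketch of the existing two-copy sequence
that the proposed process_vmsplice() aims to replace. This is an editorial
example, not code from the patch: remote_pid, remote_addr, len and out_fd are
hypothetical placeholders; process_vm_readv() and write() are the existing
syscalls, and the proposed process_vmsplice() + splice() pair is only
referenced in the comments.

/*
 * Sketch of the existing two-copy path: pull len bytes from another
 * process with process_vm_readv(), then push them to a file descriptor
 * with write().  The proposed process_vmsplice() + splice() pair would
 * move the same data into a pipe and on to out_fd without the
 * intermediate user-space bounce buffer.
 */
#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <sys/uio.h>
#include <unistd.h>

static int copy_remote_to_fd(pid_t remote_pid, void *remote_addr,
			     size_t len, int out_fd)
{
	char *buf = malloc(len);
	struct iovec local = { .iov_base = buf, .iov_len = len };
	struct iovec remote = { .iov_base = remote_addr, .iov_len = len };
	ssize_t n;

	if (!buf)
		return -1;

	/* copy #1: remote process memory -> local bounce buffer */
	n = process_vm_readv(remote_pid, &local, 1, &remote, 1, 0);
	if (n < 0) {
		perror("process_vm_readv");
		free(buf);
		return -1;
	}

	/* copy #2: local bounce buffer -> file descriptor */
	if (write(out_fd, buf, n) != n) {
		perror("write");
		free(buf);
		return -1;
	}

	free(buf);
	return 0;
}

A real caller would loop on short reads/writes; the point of the sketch is
only to show where the two copies and the bounce buffer sit, which is what
the proposed call would eliminate.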