On 09/16/2010 03:18 AM, Christopher Yeoh wrote:
> On Wed, 15 Sep 2010 23:46:09 +0900 Bryan Donlan <bdonlan@xxxxxxxxx> wrote:
> > On Wed, Sep 15, 2010 at 19:58, Avi Kivity <avi@xxxxxxxxxx> wrote:
> > >
> > > Instead of those two syscalls, how about a vmfd(pid_t pid, ulong
> > > start, ulong len) system call which returns a file descriptor that
> > > represents a portion of the process address space. You can then
> > > use preadv() and pwritev() to copy memory, and
> > > io_submit(IO_CMD_PREADV) and io_submit(IO_CMD_PWRITEV) for
> > > asynchronous variants (especially useful with a dma engine, since
> > > that adds latency).
> > >
> > > With some care (and use of mmu_notifiers) you can even mmap() your
> > > vmfd and access remote process memory directly.
> >
> > Rather than introducing a new vmfd() API for this, why not just add
> > implementations for these more efficient operations to the existing
> > /proc/$pid/mem interface?
>
> Perhaps I'm misunderstanding something here, but accessing
> /proc/$pid/mem requires ptracing the target process. We can't really
> have all these MPI processes ptracing each other just to send/receive
> a message....
You could have each process open /proc/self/mem and pass the fd using SCM_RIGHTS.
That also eliminates a race: with copy_to_process(), by the time the pid is looked up it might already designate a different process.
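Roughly something like this (an untested sketch, error handling omitted; send_mem_fd() is just an illustrative name, not existing code):

/* Untested sketch: hand our /proc/self/mem fd to a peer over a UNIX
 * domain socket with SCM_RIGHTS so the peer can pread()/pwrite() our
 * address space without ptrace-attaching. */
#include <fcntl.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>
#include <unistd.h>

static int send_mem_fd(int sock)
{
	int memfd = open("/proc/self/mem", O_RDWR);
	char dummy = 0;
	struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };
	char cbuf[CMSG_SPACE(sizeof(int))];
	struct msghdr msg = {
		.msg_iov = &iov,
		.msg_iovlen = 1,
		.msg_control = cbuf,
		.msg_controllen = sizeof(cbuf),
	};
	struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);

	cmsg->cmsg_level = SOL_SOCKET;
	cmsg->cmsg_type = SCM_RIGHTS;
	cmsg->cmsg_len = CMSG_LEN(sizeof(int));
	memcpy(CMSG_DATA(cmsg), &memfd, sizeof(int));

	/* The receiver picks the fd up with recvmsg()/CMSG_DATA() and can
	 * then do pread(fd, buf, len, remote_vaddr). */
	return sendmsg(sock, &msg, 0);
}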
--
error compiling committee.c: too many arguments to function