Hi,

I'm interested in several areas:

* Next steps for the gup/dma work: pin_user_pages*() and related APIs. I'm
  pretty sure Jan Kara is going to propose that as a TOPIC, but if not, it's
  fine for the hallway and after-hours discussion track.

* GPU-centric memory management interests:

    * The topic areas that Jerome brought up are really important to me:
      generic page protection, especially. Without those (or some other
      clever solution that maybe someone will dream up) there is no way to
      support atomic operations on memory that the CPU and GPU might both
      have mapped.

    * Peer-to-peer RDMA/migration.

    * Representing device memory. (Maybe this means without struct pages.)

* THP: modern GPUs love-love-love huge pages, and THP seems like The Way. So
  all things that make THP work better, especially THP migration, are of
  interest here.

* Memory hinting, and other ways of solving the problem of what to do upon a
  page fault (a CPU or GPU page fault, actually): migrate? Migrate peer to
  peer? What should map to where? Slightly richer information would help.
  This can easily be answered with device drivers and custom allocators, but
  for NUMA memory (malloc/mmap) it's still not all there.

thanks,
--
John Hubbard
NVIDIA