On 04/21/2015 08:50 PM, Christoph Lameter wrote:
> On Tue, 21 Apr 2015, Jerome Glisse wrote:
>> So big use case here, let's say you have an application that relies on a
>> scientific library that does matrix computation. Your application simply
>> uses malloc and gives the pointer to this scientific library. Now let's say
>> the good folks working on this scientific library want to leverage
>> the GPU; they could do it by allocating GPU memory through a GPU-specific
>> API and copying data in and out. For matrices that can be easy enough, but
>> it is still inefficient. What you really want is the GPU directly accessing
>> this malloc'ed chunk of memory, eventually migrating it to device memory
>> while performing the computation and migrating it back to system memory
>> once done. Which means that you do not want some kind of filesystem or
>> anything like that.
>
> With a filesystem the migration can be controlled by the application.

Which is absolutely the wrong thing to do when using the "GPU" (or
whatever co-processor it is) transparently from libraries, without the
applications having to know about it.

Your use case is legitimate, but so is this other case.
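
For what it's worth, the difference Jerome describes looks roughly like this
from the library's point of view. This is only a sketch: sci_matrix_multiply_*
and launch_gemm are made-up names, and the CUDA runtime calls in the first
variant merely stand in for "some GPU-specific allocate/copy API".

    /*
     * Sketch of the two models from the quoted mail.  The library entry
     * points and the launch_gemm() device kernel are hypothetical; the
     * CUDA runtime calls are just an example of a device-specific API.
     */
    #include <stdlib.h>
    #include <cuda_runtime.h>

    /* Today: the library has to bounce the data through device memory. */
    void sci_matrix_multiply_copyin(double *a, double *b, double *c, size_t n)
    {
            size_t bytes = n * n * sizeof(double);
            double *da, *db, *dc;

            cudaMalloc((void **)&da, bytes);
            cudaMalloc((void **)&db, bytes);
            cudaMalloc((void **)&dc, bytes);
            cudaMemcpy(da, a, bytes, cudaMemcpyHostToDevice);
            cudaMemcpy(db, b, bytes, cudaMemcpyHostToDevice);

            /* launch_gemm(da, db, dc, n);   hypothetical device kernel */

            cudaMemcpy(c, dc, bytes, cudaMemcpyDeviceToHost);
            cudaFree(da);
            cudaFree(db);
            cudaFree(dc);
    }

    /*
     * What the quoted mail argues for: the library works directly on the
     * caller's malloc'ed pointers; the kernel/driver may migrate the pages
     * to device memory for the computation and back afterwards, without
     * the application (or even the library) managing any copies.
     */
    void sci_matrix_multiply_transparent(double *a, double *b, double *c, size_t n)
    {
            /* launch_gemm(a, b, c, n);      hypothetical device kernel,
             * operating on anonymous malloc'ed memory */
    }

    int main(void)
    {
            size_t n = 1024;
            double *a = malloc(n * n * sizeof(double));
            double *b = malloc(n * n * sizeof(double));
            double *c = malloc(n * n * sizeof(double));

            /* The application only ever sees malloc and plain pointers. */
            sci_matrix_multiply_transparent(a, b, c, n);

            free(a);
            free(b);
            free(c);
            return 0;
    }

The point of the second variant is that neither main() nor the library entry
point knows anything about device memory; any migration policy lives below
the malloc'ed pointer, which is exactly what a filesystem-style, application-
driven interface cannot give you.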