On Wed, 22 Apr 2015, Paul E. McKenney wrote:

> I completely agree that some critically important use cases, such as
> yours, will absolutely require that the application explicitly choose
> memory placement and have the memory stay there.

Most of what you are trying to do here already exists and has been done.
GPU memory is accessible, NICs work, etc., all without CAPI. What exactly
are the benefits of CAPI? Is it driver simplification? Reduction of
overhead? If so, then the measures proposed are a bit radical and may
result in just the opposite.

For my use cases the advantage of CAPI lies in the reduced latency of
coprocessor communication. I hope that CAPI will allow fast cache-to-cache
transactions between a coprocessor and the main one. This improves the
ability to exchange data rapidly between application code and some piece
of hardware (NIC, GPU, custom hardware, etc.).

Fundamentally this is currently a design issue, since CAPI runs on top of
PCIe, and PCIe transactions establish a minimum latency that cannot be
avoided. So it is hard to see how CAPI can improve the situation.

The new thing about CAPI is cache-to-cache transactions and participation
in cache coherency at the cacheline level. That is a different approach
from the device-memory-oriented PCI transactions. Perhaps even CAPI over
PCIe can improve the situation there (maybe those transactions have lower
latency than going to device memory), and hopefully CAPI will not forever
be bound to PCIe and can thus at some point shake off the shackles of a
bus designed by a competitor.