Historically, computers have sped up memory accesses either by adding caches (or cache layers) or by moving to faster memory technologies (like the DDR3 to DDR4 transition). Today we are seeing new types of memory being exposed not as caches, but as RAM [1].

I'd like to discuss how the NUMA APIs are being reused to manage not just the physical locality of memory, but also its type. I'd also like to discuss the parts of the NUMA API that are lacking for managing these types, such as the inability to have fallback lists based on memory type instead of location.

I believe this needs to be a distinct discussion from Jerome's HMM topic. All of the cases we care about are cache-coherent and can be treated as "normal" RAM by the VM. The HMM model is for on-device memory and is largely managed outside the core VM.

I'd also like to attend any of the performance and swap discussions, as well as the ZONE_DEVICE and HMM topics.

1. https://software.intel.com/en-us/articles/mcdram-high-bandwidth-memory-on-knights-landing-analysis-methods-tools